AI-Generated Ad Copies Help Enterprises Catch Eyeballs!

Jan 08, 2020



Online digital advertisements are important global sales tools for enterprises. However, due to high costs and fierce competition, many companies invest a lot of time and resources only to get little return. To solve this problem, the project "Content Generation Techniques and Platform for Native Advertisement based on AI and Deep Learning," led by Professors Shan-Hung Wu, Yi-Wen Liu, and Cheng-Shang Chang from National Tsing Hua University and funded by the Ministry of Science and Technology (MOST), uses deep learning combined with signal processing and social media analysis techniques to generate varied, catchy, and trendy ad copies, aiming to reduce the cost and time of ad creation and help companies expand their sales in the global market more easily.


Today, people see ads on many free websites and mobile apps. Online advertisement platforms such as Google Ads and Facebook Ads have become an important sales and marketing channel for many businesses. However, these platforms only push ads to the targeted audience; it is the responsibility of the advertisers (businesses) themselves to create relevant, creative ads that attract clicks. Currently, most ads are created by humans, because writing a good ad copy involves many considerations, and it takes a lot of time and money to create a large number of ads for the global market.


To lower this barrier, Professors Shan-Hung Wu, Yi-Wen Liu, and Cheng-Shang Chang from the Department of Computer Science and Electronic Engineering at National Tsing Hua University lead an interdisciplinary research team developing a series of AI techniques that can automatically generate ad drafts. With these techniques, an advertiser only needs to provide information about the product (such as images or a description) and the context (such as the targeted display pages); the AI generator then automatically produces many ad drafts that match the semantics of the context. Businesses can easily draw inspiration from these drafts and, by editing them instead of creating ads from scratch, save both the time and cost of ad creation.
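The workflow described above can be sketched as a simple interface. Note that all names and templates here are hypothetical illustrations, and the trivial template filler stands in for the team's neural generator, which is not publicly documented:

```python
import random
from dataclasses import dataclass
from typing import List

@dataclass
class ProductInfo:
    """What the advertiser supplies about the product (hypothetical schema)."""
    name: str
    description: str

@dataclass
class AdContext:
    """Where the ad will be displayed (hypothetical schema)."""
    display_page: str

# Stand-in for a learned generator: a real system would condition a neural
# model on the product and context; here we just fill simple templates so
# the end-to-end workflow is runnable.
TEMPLATES = [
    "Discover {name}: {description}",
    "Looking for {name}? {description}",
    "{description} Try {name} today!",
]

def generate_ad_drafts(product: ProductInfo, context: AdContext,
                       n_drafts: int = 3, seed: int = 0) -> List[str]:
    """Return n_drafts candidate ad copies for the given product and context."""
    rng = random.Random(seed)
    return [
        rng.choice(TEMPLATES).format(name=product.name,
                                     description=product.description)
        for _ in range(n_drafts)
    ]

product = ProductInfo(name="Flora", description="Stay focused and off your phone.")
context = AdContext(display_page="productivity blog")
for draft in generate_ad_drafts(product, context):
    print(draft)
```

The advertiser would then pick or edit the most promising drafts rather than writing every copy from scratch.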


This project is grounded in solid research. The team has published 19 papers in top-tier international journals and conferences, including NeurIPS [1] and IEEE ICASSP [2] (the most prestigious international conferences on deep learning and audio signal processing, respectively, both ranked in the top 5 of the entire Computer Science field on the Microsoft Academic conference ranking). The team studies the problem of region-semantics-preserving image synthesis: given a reference (product) image and a region specification, the goal is to generate realistic and diverse (ad) images, each preserving the same semantics as the reference image within the specified region. The team also studies the semantic conditional generation of (ad) speech and music from given lyrics and melody, using a newly collected dataset that contains hundreds of songs.
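As a toy illustration of the region-preservation constraint only (not the team's actual deep generative model), the sketch below keeps the reference pixels inside the specified region fixed in every variant, while random values stand in for the generator's samples outside the region:

```python
import random

def synthesize_variants(reference, region, n_variants=4, seed=0):
    """Toy region-preserving synthesis: each variant copies the reference
    pixels inside `region` and randomizes everything else. A real system
    would sample the outside from a learned generative model instead."""
    rng = random.Random(seed)
    h, w = len(reference), len(reference[0])
    r0, r1, c0, c1 = region  # preserve rows r0..r1-1, cols c0..c1-1
    variants = []
    for _ in range(n_variants):
        # Random "background" standing in for generator output.
        img = [[rng.randint(0, 255) for _ in range(w)] for _ in range(h)]
        for r in range(r0, r1):
            for c in range(c0, c1):
                img[r][c] = reference[r][c]  # copy the preserved region
        variants.append(img)
    return variants

# A 4x4 grayscale "product image" with a 2x2 region to preserve.
reference = [[10 * r + c for c in range(4)] for r in range(4)]
variants = synthesize_variants(reference, region=(1, 3, 1, 3))
print(len(variants))  # number of diverse variants sharing the region
```

Each variant differs outside the region (diversity) yet is identical to the reference inside it (semantics preservation), which mirrors the constraint the paper formalizes.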


The research team also cooperates with industry players such as ASUS, United Microelectronics Corporation, KKBox, and several startups. In particular, the team helped a startup called AppFinca Inc. promote its product "Flora," getting the app ranked No. 1 on Taiwan's Apple App Store in the Free Productivity Tools category, ahead of famous apps such as Google Gmail. The team also helped push Flora up to 6th place in the Free Productivity Tools ranking on the UK's App Store. Flora is a mobile app that helps people stay away from their phones while studying or at work.


As the next step, the researchers are exploring social networks to discover trending, influential topics so that the machine can generate ads with more impact. They also plan to integrate their techniques into a unified API or system so that enterprises can more easily get what they need to compete in the global market.


References to International Publications:

[1] Wei-Da Chen and Shan-Hung Wu, "CNN^2: Viewpoint Generalization via a Binocular Vision," in Proc. of the 33rd Conference on Neural Information Processing Systems (NeurIPS), December 2019.

[2] Bang-Yin Chen, Tzu-Chi Liu, and Yi-Wen Liu, "Stereo source separation in the frequency domain: Solving the permutation problem by a sliding k-means method," in Proc. of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2019.