
Good ‘actors,’ bad actors in AI: Travel Weekly

Jimmy Besiada


In a recent World Travel and Tourism Council (WTTC) webinar, two speakers, Aimee and Aidan, summarized some of the group’s recent research, conducted in collaboration with Microsoft, on AI in travel.

“AI is no longer a futuristic concept,” Aimee said. “It is a reality today that can transform our industry in exciting and wonderful ways. Imagine being able to improve your business processes and revolutionize the way you market, sell and promote your tourism destinations.”

Aidan talked about recent innovations in generative artificial intelligence.

“The future of travel and tourism is bright, and artificial intelligence is the key to unlocking a new world of possibilities,” Aidan said.

A few paragraphs ago, I called Aimee and Aidan “speakers.” That was a deliberate choice of wording: Neither is human. “They” were generative AI creations, made by WTTC’s director of travel transformation, James MacDonald. He called them “AI-mee” and “AI-dan.”

To create them, MacDonald uploaded WTTC’s AI reports to an AI assistant and asked it to produce a summary and a two-minute script covering the main points. MacDonald then asked the AI to create image prompts tied to the script and fed them into the AI’s image generator. Finally, he fed the text and images into a speech generator, and AI-mee and AI-dan were born.

It was a great video. I thought it was fairly obvious that AI-mee and AI-dan were AI creations; the technology needed to perfectly mimic human behavior in a digital environment isn’t there. Yet.

As generative AI gets better at things like imitating voices and even video, the opportunities, and the threats, it poses loom larger.

To be clear, MacDonald wasn’t trying to fool anyone with AI-mee and AI-dan; he explained exactly how he created them before playing his demo video.

However, the idea that a bad actor could use generative AI to, for example, perpetrate fraud via deepfakes is frightening. (A deepfake is a piece of content, such as a photo, video or audio clip, created to impersonate a human.)

I haven’t heard of any deepfake attacks on agencies yet, but it’s likely only a matter of time. In a report last year, the Bank of America Institute called deepfakes “one of the most effective and dangerous disinformation tools” and noted that deepfakes imitating executives are already being used to target some organizations.

Travel agencies are frequent targets of scammers. ARC keeps a page updated with the latest attempts, such as a scammer claiming to be Sabre sending an email asking advisors to click a link to log into the GDS. Doing so hands the agent’s Sabre login credentials directly to the scammer.

For deepfakes specifically, the Bank of America Institute recommends education first. It also recommends following cybersecurity best practices and strengthening authentication and verification protocols.

The report also offers some practical advice for spotting deepfakes, at least for now (the technology will improve over time): Deepfake audio can include pauses between words or sentences that sound longer than normal, and voices can sound flat. For video, look for poor lip-syncing, long periods without blinking, blurring around the jawline and patchy skin tone.

Be vigilant. WTTC’s AI-mee and AI-dan were friendly presenters and a great use of the technology. But it may not be long before nefarious deepfakes come along.
