Wednesday, May 15, 2024

Celebrity Deepfakes Vs. Deepfake Celebrities And Valid Vs. Real Media


FKA Twigs attends The 2024 Met Gala celebrating "Sleeping Beauties: Reawakening Fashion" at The Metropolitan Museum of Art on May 6, 2024 in New York City. (Photo by Jeff Kravitz/FilmMagic)

I'm sorry to say that until last week I'd never heard of my new favourite British pop star FKA Twigs, but I was fascinated by her testimony to a US Senate Judiciary subcommittee that she has developed her own deepfake version of herself (trained on her personality and able to speak in French, Korean and Japanese) so that she can leave the bots to interact with journalists and her many fans while she focuses on her music. This seems a very farsighted and innovative use of the technology. She says that in an age that demands a lot of press interaction and promotional work, this will free her up to "spend more time making art". But I do wonder how the fans and journalists will know that they are looking at the real fake FKA Twigs and not a fake fake FKA Twigs?

Make It Until You Can Fake It

The whole real/fake thing can get very confusing. It can obviously be quite difficult to tell real celebrity content from a celebrity fake, as evidenced by the fact that American pop singer Katy Perry's mother was fooled by an AI-generated picture of the star apparently attending the Met Gala (an annual fundraising gala held for the benefit of the Metropolitan Museum of Art's Costume Institute in Manhattan) in a billowing floral gown. On Instagram, Ms. Perry shared a screen grab of the text message from her mother saying "What a gorgeous gown, you look like the Rose Parade, you are your own float lol" and her reply: "lol mom the AI got you too, BEWARE!".

My legions of adoring fans, friends and family will soon face a similar problem. Much like FKA Twigs, I have a deepfake version of myself up and running so that I can focus more on my writing, my fantasy soccer team and an AI-powered reorganisation of my sock drawer (I'm joking of course, there is no app for this yet, although I am sure it is only a matter of time). Acolytes can now visit DaveGPT without interrupting my communing with the muses. But how will a journalist, for example, know that the DaveGPT that they are asking about implementation options for central bank digital currency is the real DaveGPT and not a bot operated by agents of a foreign power dedicated to the downfall of our democracy?

Fake fakes. © Helen Holmes (2024).

This is a serious problem. Speaking on the Andreessen Horowitz YouTube channel recently, noted venture capitalist Marc Andreessen said that "Detecting deepfakes is not going to work because AI is already too good, so the solution is to certify content as real". He is absolutely correct. I've suggested before that we need to start setting the default on browsers and devices to not display any text, image or video that does not have a valid digital signature. So if you are looking at a video of Joe Biden playing table tennis with Vladimir Putin, you will at least know who created it.

You might wonder how Instagram or Facebook or WhatsApp could know that it was Dave's Photoshop or the BBC or FKA Twigs that produced the image. Well, that's easy. Photoshop could be updated to write a digital signature into the image metadata by default. So when I save an image, Photoshop computes a hash across the image and the metadata and then signs that hash using my private key. Instagram could then recompute the hash, follow a link to my public key and use it to verify the signature: if the hashes match, it's my image and it has not been manipulated.
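
To make the mechanics concrete, here is a minimal sketch of that flow in Python, assuming an Ed25519 keypair and the "cryptography" package. The sign_content and verify_content helpers, the file contents and the metadata fields are all illustrative assumptions, not a real Photoshop or Instagram API.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def sign_content(content: bytes, metadata: dict, private_key) -> bytes:
    """Hash the content together with its metadata, then sign the digest."""
    digest = hashlib.sha256(
        content + json.dumps(metadata, sort_keys=True).encode()
    ).digest()
    return private_key.sign(digest)


def verify_content(content: bytes, metadata: dict, signature: bytes, public_key) -> bool:
    """Recompute the digest and check the signature against the creator's public key."""
    digest = hashlib.sha256(
        content + json.dumps(metadata, sort_keys=True).encode()
    ).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


# "Dave's Photoshop" signs the image and its metadata on save...
daves_key = ed25519.Ed25519PrivateKey.generate()
image = b"imaginary JPEG bytes"
meta = {"creator": "Dave", "created": "2024-05-15"}
sig = sign_content(image, meta, daves_key)

# ...and "Instagram" checks it on upload using Dave's public key.
print(verify_content(image, meta, sig, daves_key.public_key()))         # True
print(verify_content(image + b"!", meta, sig, daves_key.public_key()))  # False: manipulated
```

Any change to the image or its metadata makes the recomputed hash diverge from the signed one, so the check fails and the platform knows the content has been altered.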

Since FKA Twigs' public key would be known to the, well, public, it would be easy for anyone to check the signature and see that the content came from her and had not been altered.

(There is an immediate opportunity here for someone - banks? - to provide the digital wallets that can safely use the necessary keys and certificates without the public needing to know anything about keys and certificates.)

Now if someone takes a clip from an FKA Twigs video and edits it to get an extract, the extract will no longer have her digital signature. Suppose it is the BBC doing the editing. When the BBC saves the edited clip, it will have a new digital signature computed over it and signed using a BBC private key. Again, the BBC's public key is public, so anyone can now verify that this clip did indeed come from the BBC and not from FKA Twigs.
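
Sticking with the same hypothetical helpers and keys as the earlier sketch (the "FKA Twigs" and "BBC" keys and the provenance fields are purely illustrative), re-signing a derived clip might look like this:

```python
# Continues the earlier sketch: the helpers just sign bytes plus metadata,
# so the same mechanism works for an edited video clip.
twigs_key = ed25519.Ed25519PrivateKey.generate()
bbc_key = ed25519.Ed25519PrivateKey.generate()

original_clip = b"imaginary FKA Twigs video bytes"
original_sig = sign_content(original_clip, {"creator": "FKA Twigs"}, twigs_key)

extract = original_clip[:16]  # the edited extract

# The extract no longer verifies against FKA Twigs' signature...
print(verify_content(extract, {"creator": "FKA Twigs"}, original_sig, twigs_key.public_key()))  # False

# ...so the editor signs it afresh under its own key, recording where it came from.
extract_meta = {"editor": "BBC", "derived_from": "FKA Twigs video", "created": "2024-05-15"}
extract_sig = sign_content(extract, extract_meta, bbc_key)

# Anyone can now check the extract against the BBC's public key.
print(verify_content(extract, extract_meta, extract_sig, bbc_key.public_key()))  # True
```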

If that all seems a little futuristic, note that TikTok has just announced that it plans to start labelling AI-generated images and video using a digital watermark known as Content Credentials. The Content Credentials technology was spearheaded by the Coalition for Content Provenance and Authenticity, a group co-founded by Adobe, Microsoft and others. It has already been adopted by the likes of ChatGPT creator OpenAI. YouTube and Meta have also said they plan to use it. If you use OpenAI's DALL-E tool to generate an image, OpenAI attaches a watermark to the resulting image and adds data to the file (using hashes and digital signatures as mentioned earlier) that can later indicate whether it has been tampered with. If that marked image is then uploaded to TikTok, it will be automatically labelled as AI-generated.

Labels Are Not Enough

Is it enough to (voluntarily) label AI content though? The internet is drowning in "botshit". As the writer Cory Doctorow points out, this botshit can be produced at "a scale and velocity that beggars the imagination". He highlights Amazon's decision to cap the number of self-published "books" that an author can submit to a mere three per day! Given the tidal wave of AI-generated nonsense being uploaded — including "books" about King Charles' cancer and so on — we need to go further than encouraging tool makers to label AI content.

Labelling AI content isn't enough. Instead, we should assume that everything is botshit unless we are presented with cryptographic evidence that it was produced by a specific person or organisation, even if we do not know (or care) who that person is! Marc Andreessen is right to point to digital signatures as the way forward and this means, as I am fond of repeating, that the IS-A-PERSON credential will be more valuable than ever.
