Because images and videos can now be produced with artificial intelligence, it’s increasingly difficult to determine which are real and which are fake. One approach has been to look for “tells” that reveal an image is AI-generated. However, as AI continues to improve, this approach creates an arms race between AI photo generation and AI detection, a race that AI detection is unlikely to win.
In this presentation, you’ll learn:
- How the traditional archival process can help
- How cryptographic hashes can prevent tampering
- Why a timestamping service regularly printed data in the New York Times classified section (!)
- What content-addressed storage is
- How digital signatures can help investigators “go to the source” even if the data is sent by an untrusted third party
What's the focus of your work these days?
Now that the major hype around blockchains seems to be over, and all the trendy people have moved on to AI, I think there’s an opportunity to take some parts of blockchain technology – reliable, cheap, and useful things like cryptographic hashes, digital signatures, and timestamping services – and apply them to everyday life. In many cases, these cryptographic tools are already being used under the hood, without the general public being aware of it. For instance, some credit cards use digital signatures to prove that the credit card chip was present at the terminal. And in software, digital signatures and hashes are used to guarantee that code hasn’t been tampered with in the download process.

These tools are very low-cost and can be used now. But many problems in everyday life that would greatly benefit from cryptography aren’t using it yet. Bridging the gap between cryptography and everyday business problems has been the focus of my most recent work.
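The download-verification case mentioned above can be sketched with Python’s standard library. The idea is that the publisher announces a SHA-256 digest of the release alongside the download link; the downloader recomputes the digest locally, and any mismatch means the bytes were altered in transit. (This is a minimal illustration of hash checking, not any particular project’s release process.)

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks
    so that large downloads don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Even a one-byte change to the content produces a completely
# different digest, which is what makes tampering detectable.
original = hashlib.sha256(b"release-v1.0 contents").hexdigest()
tampered = hashlib.sha256(b"release-v1.0 contents!").hexdigest()
assert original != tampered
```

Note that a hash alone only detects accidental or in-transit corruption if the published digest itself is trustworthy; that is where digital signatures come in, since they tie the digest to the publisher’s key.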
What's the motivation for your talk at QCon San Francisco 2023?
One of my most recent clients, Starling Lab, which was co-founded by the Stanford Department of Electrical Engineering and the USC Shoah Foundation, has been working on the problem of image verification: how do we know which images are real, and which are fake?
Image verification has become an increasingly important societal problem due to the rise of AI and citizen journalism. Starling Lab uses an archival approach, meaning that rather than looking for the “tells” of AI in an image, the Lab encourages journalists, news organizations, and open source investigators to record information about images that are believed to be valid. For instance, a photographer in the field might use their cell phone to digitally sign and timestamp a photograph that they just took. The motivation for my talk was to share both the fascinating problem of image verification and the cutting-edge technology that we can use to combat fake images.
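The archival step described above can be sketched in Python. The SHA-256 digest of the image bytes acts as a content address: any later edit to the image changes the digest, so tampering is detectable. This is only an illustrative sketch; the field names are hypothetical, and a real archival pipeline would additionally attach a digital signature from the photographer’s key and a proof from a timestamping service, both of which are elided here.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_archival_record(image_bytes: bytes, author: str) -> dict:
    """Sketch of the metadata a field journalist's device might record
    at capture time. Fields here are illustrative, not a real schema."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return {
        "sha256": digest,  # content address of the exact captured bytes
        "author": author,  # hypothetical field, for illustration
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

record = make_archival_record(b"raw image bytes...", "photographer@example.org")
print(json.dumps(record, indent=2))
```

Because the record commits to the hash rather than the image itself, an investigator who later receives the image from an untrusted third party can recompute the digest and compare it against the archived record.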
How would you describe your main persona and target audience for this session?
Prior knowledge of cryptography isn’t assumed, but people who are already familiar with cryptography basics will be intrigued by the image verification use case.
Is there anything specific that you'd like people to walk away with after watching your session?
What is something interesting that you learned from a previous QCon?
One of my favorite talks at QCon was by a software engineer who was writing his own smartwatch applications. As time goes on and we use more devices (electronic pacemakers, for example), it’s very important to me that we be able to modify our devices and maintain them as we see fit.
Software Engineer and Consultant, Previously Lead Engineer on the Zoe Smart Contract Framework at @Agoric
Kate Sills is a software engineer and consultant specializing in applied cryptography. She was previously the lead engineer on Agoric’s smart contract framework (Zoe), which protects users from malicious contracts. She has also researched and written on the potential uses of smart contracts to enforce agreements and create new forms of institutions apart from traditional legal jurisdictions. In 2022 and 2023, she served as a member of the technical review committee for the Centre for Computational Law in Singapore. Kate graduated from the University of California at Berkeley with degrees in Computer Science and Cognitive Science.