Notable AI images and drawings have been around since 1973, but they have never been as realistic or as easy to make as they are today. Now that anyone can use AI to create realistic art, anyone can also use the technology to ruin someone's reputation. Most AI images today depict celebrities or politicians, but they could be made of anyone, which has some people worried. These worries have led companies like Sony, Nikon, and Canon to announce support for the Content Authenticity Initiative, or CAI. For now, Leica is the only company to ship the CAI digital signature system in a camera, but Sony has announced that it will bring the technology to the upcoming a9 III as well as the Alpha 1 and a7S III.
The CAI verification system in cameras is not the only tool for combating AI images: a web application has also been built that can show an image's provenance, provided the image has been embedded with the correct digital signature. Canon is also reportedly building an image management application that can tell whether images were actually taken by humans or generated with AI. Nikon has announced plans to integrate an image provenance function into its Z9. The function will support the authenticity of images by attaching information about their sources and provenance, which viewers will be able to click to show or hide.
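To make the idea of an embedded digital signature more concrete, the sketch below shows, in rough terms, how a verification tool might confirm that an image's bytes match a signature issued by a trusted camera key. This is only a conceptual illustration: the file paths, the Ed25519 key type, and the detached-signature layout are assumptions made for the example, and the real CAI/C2PA system embeds a richer signed provenance manifest inside the image file itself.

```python
# Conceptual sketch of signature-based image verification.
# NOT the real CAI/C2PA format: assumes a detached Ed25519 signature
# and a known public key published by the camera vendor.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature


def verify_image(image_path: str, signature_path: str, public_key_bytes: bytes) -> bool:
    """Return True if the signature matches the image bytes, False otherwise."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()

    public_key = ed25519.Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        # Raises InvalidSignature if the image was altered after signing
        # or was never signed by this key at all.
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False
```

A provenance tool like the CAI's presumably does something along these lines under the hood, with the key difference that the signature and the provenance record travel inside the image file rather than alongside it, so an unsigned or tampered file simply fails the check.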
According to PetaPixel, "The Verify system is not designed to detect an AI-generated image, although it could at least be able to verify if an image has the CAI's digital signature attached, which would not be present in a fabricated image." Although such a system can confirm whether an image carries the CAI's digital signature, the promise of a future full of verified real images may be impossible to achieve. The majority of photos will still be taken without the CAI signature, leaving only signed images with any form of confirmation. When the CAI was originally created, it was meant to keep news organizations from publishing Photoshopped images, but it has the positive side effect of also helping to flag AI images. The CAI will still most likely be used mainly by news organizations such as The New York Times, which could use the technology to avoid publishing AI-generated or Photoshopped images. Another major point is that the CAI will not appear on social media platforms such as X, Instagram, or Meta, because those companies are not interested in adding this information as of now.
PetaPixel believes misinformation can be fought at scale by adding the CAI signature to images and by telling viewers to be skeptical of any image that lacks it. Those at PetaPixel know the average user won't bother to investigate, which is why they hope to make checking the information take no more effort than reading a food label. The technology is not perfect and still has a ways to go, but it could change the way people view images and, more importantly, confirm or deny an image's authenticity, preventing harm in the form of misinformation and slander.