GenAI – Watermark | Copyright May Just Apply to Mainstream Internet


—By David Stephen

There is a recent story on Nikkei Asia, "Nikon, Sony and Canon fight AI fakes with new camera tech," stating that, "Nikon, Sony Group and Canon are developing camera technology that embeds digital signatures in images so that they can be distinguished from increasingly sophisticated fakes. An alliance of global news organizations, technology companies and camera makers has launched a web-based tool called Verify for checking images free of charge. If an image has a digital signature, the site displays date, location and other credentials."
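As a rough illustration of how this kind of signature-based provenance could work, here is a minimal Python sketch using an Ed25519 keypair: the "camera" signs the image bytes together with its credentials, and a verifier checks the signature before displaying the date and location. This is only an assumption-laden sketch, not the actual Nikon, Sony, Canon or Verify implementation; the function names and metadata fields are illustrative.

```python
# Minimal sketch of signature-based image provenance (illustrative only;
# NOT the Nikon/Sony/Canon or Verify implementation).
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_capture(private_key, image_bytes, metadata):
    # Camera side: sign the image bytes together with its credentials.
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return private_key.sign(payload)

def verify_capture(public_key, image_bytes, metadata, signature):
    # Verifier side (e.g., a checking website): validate the signature and,
    # if it holds, return the credentials to display.
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return metadata        # credentials can be shown as verified
    except InvalidSignature:
        return None            # image or metadata was altered, or unsigned

# Usage sketch with hypothetical values.
camera_key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
credentials = {"date": "2024-01-05", "location": "Tokyo"}
sig = sign_capture(camera_key, image, credentials)
print(verify_capture(camera_key.public_key(), image, credentials, sig))
```

Note that a scheme like this can only confirm that a signed image is unchanged; an image generated or edited outside the signing ecosystem simply carries no valid signature, which is part of why such checks matter most where people bother to check.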

This comes alongside the copyright infringement lawsuit brought by The New York Times against OpenAI and Microsoft. Watermark and copyright approaches would likely work for established companies and major scenarios, but they may not apply to situations outside the spotlight or beyond reputable teams. These center-stage efforts are good for trust and protection. They may, however, repeat an error from earlier in the internet era.

The music, movie and cable industries won some lawsuits against websites years ago, which helped, but fell short elsewhere. The internet scales because it is digital rather than physical (physical goods are harder to transport, and wrongs involving them are easier to prosecute), and that scale made uses the content owners never authorized possible, resulting in significant losses for them.

Companies whose focus was not technology, but for whose industries technology was developed, could not define how to ensure protections in ways that might make it possible to track content; they had to keep catching up, and sometimes they lost.

Generative AI is not just the internet or digital; it is more like a reproducer within the digital. There are sources on the internet and events, like a major election, where alerts about fakes and caution would be significant, but there are also chat apps, groups on social media, emails and so forth where AI fakes may sprawl.

Although fact-check websites have been helpful, there is also a tendency that, in refuting some stories, images or videos, people get to hear or see them anyway, or ideas may start to take shape about what something might be, especially because of polarization along different lines.

Also, while there are top websites and sources, there are openings for different voices in comments, where allusions can be made to fake information. There are also major sources on different sides of the spectrum where steam may gather.

LLMs will advance as uploads continue and processing power grows. This means that while watermarks and copyrights may have some effect, improvements to AI, especially those adaptable by sources outside the mainstream, may make using detectably fake images unnecessary, or there could be an appendage to an original that appears at some point and then disappears, gaming the watermarks.

Generative AI safety strategies aimed solely at the mainstream may result in worse outcomes than digital piracy, more than can be contained, since digital savvy is not as widespread as digital access.
