Google Launches Watermarking Tool For AI-Generated Images


Google DeepMind, the company's artificial intelligence research lab, has teamed up with Google Cloud to launch a tool for identifying AI-generated images.

The tool, called SynthID, embeds a digital watermark directly into the pixels of an image. The watermark is invisible to the human eye, but it can be picked up by detection software, making it possible to tell whether an image was produced by Imagen, Google's text-to-image model.

Under the hood, SynthID uses two deep learning models. The first embeds the watermark by altering the image in ways so subtle that they are imperceptible even on close inspection. The second acts as a detector: it scans an image for those alterations and reports whether the watermark is present.
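SynthID's models and training details are not public, so the following is only a minimal sketch of the general embed-and-detect idea, using classic spread-spectrum watermarking in place of learned networks. The function names, the key parameter and the signal strength are all illustrative assumptions, not Google's API.

```python
# Toy embed/detect pair in the spirit described above. This is classic
# spread-spectrum watermarking, NOT SynthID's actual (unpublished) method.
import numpy as np

def embed_mark(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a key-derived, low-amplitude +/-strength pattern to the pixels."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255)

def detect_mark(image: np.ndarray, key: int) -> float:
    """Correlate the image with the key's pattern; a high score means the
    mark is likely present, a score near zero means it is likely absent."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return float(np.mean((image - image.mean()) * pattern))

image = np.random.default_rng(0).uniform(0, 255, size=(256, 256))
marked = embed_mark(image, key=42)
print(detect_mark(marked, key=42))  # ~2.0: mark present
print(detect_mark(image, key=42))   # ~0.0: no mark
```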

Notably, the watermark is designed to survive common edits. Resizing the image, compressing it or applying filters like the ones on Instagram should leave the hidden mark intact and detectable.

When an image is run through SynthID's detector, the tool returns one of three possible answers: the watermark is detected, meaning the image was almost certainly generated by Imagen; no watermark is detected; or the watermark is possibly present, meaning the image should be treated with caution.
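In practice, a three-way answer like that presumably comes from thresholding a detector confidence score. Google has not published its decision rule; the thresholds below are invented purely to illustrate the shape of such a mapping, continuing the toy scoring scale from the sketch above.

```python
# Hypothetical mapping from a raw detector score to SynthID's three answers.
# The threshold values are illustrative assumptions, not Google's.
def classify(score: float) -> str:
    if score >= 1.5:
        return "watermark detected"           # high confidence: model-generated
    if score >= 0.5:
        return "watermark possibly detected"  # ambiguous: treat with caution
    return "no watermark detected"

for s in (2.1, 0.8, 0.1):
    print(f"score {s}: {classify(s)}")
```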

The goal is provenance: making it possible to tell whether an image was created by a machine or by a person, and to know where it came from, which is increasingly important as AI-generated imagery circulates online.

Watermarks are only one way of establishing where an image came from. Another is metadata, the descriptive information attached to an image file, such as when and how it was created. The trouble is that metadata can be stripped out or edited, so on its own it is not a reliable indicator of an image's origin.
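To see why metadata is fragile, consider how little it takes to remove it. A minimal sketch using the Pillow imaging library (the file names are placeholders):

```python
# Reading a photo's EXIF metadata, then producing a copy with none of it.
from PIL import Image

img = Image.open("photo.jpg")
print(dict(img.getexif()))  # whatever EXIF tags the file carries

# Copying just the pixels into a fresh image discards the metadata entirely.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("photo_no_metadata.jpg")
print(dict(Image.open("photo_no_metadata.jpg").getexif()))  # {}
```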

Generative AI, the class of models that produce images and other media from prompts, has opened up plenty of creative possibilities. But it can also be misused: the same models can produce images that look just like real photographs, or depict things that never happened.

AI-generated images of public figures have already circulated on platforms such as Facebook and Instagram, and they are becoming genuinely hard to distinguish from real photographs. A widely shared AI-generated image of the Pope in a puffy jacket, for instance, caused considerable confusion even though it was not created with malicious intent.

The Google DeepMind researchers behind the tool argue that being able to identify AI-generated images is critical for curbing the spread of misinformation. Giving people the means to distinguish real content from synthetic content, they say, lets them make better-informed judgments about what they see.

The researchers say they tested the tool against a wide variety of image types to make sure it performs reliably across many different scenarios.

The evaluation focused on two properties. The first is robustness: the detector should still find the hidden watermark even after the image has been modified, for example by scaling it down or up (a sketch of such a check follows below).
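As an illustration only, here is what that kind of robustness check might look like, reusing the toy embed_mark and detect_mark functions from the earlier sketch. The toy per-pixel pattern is far less robust than a trained watermarking network, so the point here is the shape of the test, not the scores it produces.

```python
# Sketch of a robustness check: watermark, transform, then re-detect.
# Requires embed_mark/detect_mark from the earlier spread-spectrum sketch.
import numpy as np
from PIL import Image

def resize_roundtrip(image: np.ndarray, factor: float) -> np.ndarray:
    """Shrink or enlarge the image, then restore its original size."""
    h, w = image.shape
    resized = Image.fromarray(image.astype(np.uint8)).resize(
        (int(w * factor), int(h * factor)))
    return np.asarray(resized.resize((w, h)), dtype=np.float64)

image = np.random.default_rng(0).uniform(0, 255, size=(256, 256))
marked = embed_mark(image, key=42)

for factor in (0.9, 0.75, 0.5):
    score = detect_mark(resize_roundtrip(marked, factor), key=42)
    print(f"resize x{factor}: detector score = {score:.2f}")
```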

Image Source – Google DeepMind

The second is imperceptibility: the watermark has to blend into the original image well enough that a viewer does not notice it is there at all.
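A standard, if crude, way to quantify imperceptibility is the peak signal-to-noise ratio (PSNR) between the original and watermarked images. The 40 dB bar below is a common rule of thumb in the watermarking literature, not a published SynthID criterion.

```python
# PSNR between an original image and its watermarked copy; higher is better.
# Requires embed_mark from the earlier spread-spectrum sketch.
import numpy as np

def psnr(original: np.ndarray, marked: np.ndarray) -> float:
    mse = np.mean((original - marked) ** 2)
    return float(10 * np.log10(255.0 ** 2 / mse))

image = np.random.default_rng(0).uniform(0, 255, size=(256, 256))
marked = embed_mark(image, key=42)
print(f"PSNR: {psnr(image, marked):.1f} dB")  # above ~40 dB is typically invisible
```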

In short, the aim was a watermark that stays detectable across many kinds of images and transformations without visibly altering the pictures it protects.

“This is a significant announcement by Google,” Arun Chandrasekaran, distinguished vice president analyst at Gartner, told SiliconANGLE.

Clients that use Google's text-to-image diffusion model, Imagen, now have the option of adding a watermark, he noted. Given the rise of deepfakes and increasing regulation around the world, watermarking is an important step in combating them.


Google is one of seven major technology companies that made voluntary commitments to develop AI safely under a White House initiative announced in July. It is also a founding member of the Frontier Model Forum, an industry group that includes Microsoft, ChatGPT maker OpenAI and the startup Anthropic, all of which say they want to develop AI safely and responsibly.

The announcement also comes as the European Union prepares its AI Act, a set of rules intended to keep AI safe and trustworthy across member states, part of a broader push in several parts of the world to govern how AI is used.

Chandrasekaran said it's still a “wait and see” situation when it comes to the robustness of the watermark technology DeepMind has produced, which the researchers themselves warned is not foolproof against all types of image manipulation. “Also, the watermark is specific to Google’s model and hopefully the technology companies will collaborate on standards that work across AI models,” Chandrasekaran added.

“We hope our SynthID technology can work together with a broad range of solutions for creators and users across society, and we’re continuing to evolve SynthID by gathering feedback from users, enhancing its capabilities, and exploring new features,” the researchers said.
