I'm lucky enough not to be stuck in one bubble, but to be part of many different sectors, and one of those is definitely AI - my career and one of my passions. So when people started to ping me about recent scary headlines, that really rubbed me the wrong way.

So what happened? Two new innovations came out last week, one from NVIDIA, the other from OpenAI.

This Person Does Not Exist

A new product popped up this week: https://thispersondoesnotexist.com/ and the way I got pinged about it varied wildly. My boss said: "Look, the new GANs look amazing." My mentions on Twitter looked a little different though: "Is this a new kind of AI? Does this have something to do with Deep Fakes?"

I'm a big fan of GANs; I work with them to generate synthetic fashion designs. The main difference between my pixelated, scrappy-looking results and This Person Does Not Exist is time and resources.

NVIDIA - shoutout to my former employer - is probably one of the most exciting companies when it comes to this kind of tech. Not only because their researchers are amazing, but because AI/ML is powered by GPUs, specifically NVIDIA ones, and they have more than enough of them.

[Image: how long it took to train this network, using $10k GPUs]

NVIDIA already released a system called ProGAN last year, with similar results. The big innovation here isn't the super high-res faces: it's the fact that with this new architecture, called StyleGAN, they can control what a generated face looks like. I'm excited about this, and you shouldn't be scared of it! If you want to know how a GAN works, I wrote about it.
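If "GAN" is just a buzzword to you, here's a toy sketch of the idea in PyTorch - my own illustration, nothing to do with NVIDIA's actual StyleGAN code. Two networks play a game: a generator turns random noise into samples, and a discriminator tries to tell those samples from real data. To keep it self-contained, the "real data" here is just a Gaussian blob, so the whole thing trains in seconds:

```python
# Minimal GAN sketch: a generator learns to turn random noise into samples
# that a discriminator can no longer tell apart from the real data.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim)
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # "Real" samples: a Gaussian blob centred at (3, 3)
    real = torch.randn(64, data_dim) + 3.0
    fake = generator(torch.randn(64, latent_dim))

    # Train the discriminator: real -> 1, fake -> 0
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator: fool the discriminator into outputting 1
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(generator(torch.randn(5, latent_dim)))  # samples should land near (3, 3)
```

StyleGAN plays this same game, just scaled up massively and with the generator reworked so that the noise input maps to interpretable aspects of the face - pose, hair, age and so on. That's the "control" part that makes it exciting.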

The Dangerous Text Generator

This was interesting. OpenAI is a non-profit focused on AI research and on getting to "Artificial General Intelligence". I really admire their work: I worked with their simulation toolkit "OpenAI Gym" for a while, I like their work on AI policy, and their papers have helped me a lot.
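If you've never touched Gym, the API is delightfully small - here's the classic reset/step loop (as the Gym releases of this era expose it), running one CartPole episode with random actions:

```python
# One episode of CartPole with a random policy, using the classic Gym API.
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()          # pick a random action
    obs, reward, done, info = env.step(action)  # advance the simulation
    total_reward += reward
print("episode reward:", total_reward)
```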

Their contributors include Elon Musk, Sam Altman, Peter Thiel and Greg Brockman, so you can be sure that they rake in some great headlines.

This week, they announced a new Natural Language Processing model that generates some great text samples based on a human writing prompt.

And then comes the big error: instead of - as usual - releasing it to the public, they labelled it "too dangerous to release". That didn't just rub researchers the wrong way, it was welcome food for the press. It sparked headlines like "Elon Musk created an AI that is too dangerous to release."

Now - personally, I think their reasoning is plausible. The text generator is great, so somebody could start a bunch of Twitter bots, set "Hillary is a crook" as the writing prompt, and spam the web even more than it already gets spammed.
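To make that concrete, here's the cheapest possible text generator: a toy word-level Markov chain of my own, obviously nothing like OpenAI's model. But it shows the principle of conditioning on a prompt and then repeatedly sampling "what word comes next":

```python
# Toy prompt-conditioned text generator: a word-level Markov chain.
import random
from collections import defaultdict

corpus = (
    "the election was rigged and the media is lying and the people know "
    "the truth and the truth is out there and the media hides the truth"
).split()

# Count which word follows which in the corpus
chain = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    chain[a].append(b)

def generate(prompt, length=15):
    words = prompt.split()
    for _ in range(length):
        candidates = chain.get(words[-1])  # words seen after the last word
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the media"))  # a different rant every run
```

Swap that dictionary lookup for a huge neural network trained on millions of web pages and the output stops reading like gibberish - and that's exactly the scenario OpenAI is worried about.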

But the narrative of "too dangerous to release" suggests that this is some kind of new alien tech - did we finally enter the Singularity? - when all it really is, is a security measure to prevent some spam bots.

The other controversial thing is that this publication didn't go out to researchers first; it went out to journalists and reporters. So until independent researchers can reproduce the results, take the whole thing with a grain of salt.

"Nothing like the internet to scare you about things you didn’t even know existed" - a message sent to the MakerMag Slack channel

So, to sum it up: don't be scared of AI. The media likes to blow things out of proportion. Amazing things are happening, and I'm happy that Machine Learning is starting to find real uses in transportation, software businesses, art, fashion and more. When the next big headlines come, ping me.