
Training real AI with fake data

AI systems have an endless appetite for data. For an autonomous car’s camera to identify pedestrians every time — not just nearly every time — its software needs to have studied countless examples of people standing, walking and running near roads.

Yes, but: Gathering and labeling those images is expensive and time-consuming, and in some cases impossible. (Imagine staging a huge car crash.) So companies are teaching AI systems with fake photos and videos, sometimes themselves generated by AI, that stand in for the real thing.
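
To make the appeal concrete, here is a purely illustrative Python sketch (my own toy example, not any company's actual pipeline; it assumes the Pillow imaging library is installed). It composites a crude stand-in pedestrian onto randomized backgrounds, and because the generator places the object itself, every image arrives with a perfect label at zero annotation cost.

import random
from PIL import Image, ImageDraw

def make_synthetic_sample(width=320, height=240):
    # Random background color stands in for a rendered road scene
    # (real pipelines use photorealistic 3D rendering).
    bg = Image.new("RGB", (width, height),
                   tuple(random.randint(60, 200) for _ in range(3)))
    draw = ImageDraw.Draw(bg)

    # Randomize the pedestrian's size and position on each sample,
    # so the learner sees many variations ("domain randomization").
    ped_w = random.randint(20, 60)
    ped_h = random.randint(60, 140)
    x = random.randint(0, width - ped_w)
    y = random.randint(0, height - ped_h)

    # A crude upright rectangle stands in for a rendered person.
    draw.rectangle([x, y, x + ped_w, y + ped_h], fill=(200, 50, 50))

    # The label is exact and free: we placed the object ourselves,
    # so no human ever has to draw this bounding box.
    label = {"class": "pedestrian", "bbox": [x, y, x + ped_w, y + ped_h]}
    return bg, label

# Generate a small batch of perfectly labeled training images.
dataset = [make_synthetic_sample() for _ in range(100)]

The free, pixel-perfect labels are the whole point: with real footage, each bounding box would have to be drawn by a human annotator.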

The big picture: A few weeks ago, I wrote about the synthetic realities that surround us. Here, the machines we rely on now, or soon may, are also learning inside their own simulated worlds.

How it works: Software that has been fed tons of human-labeled photos and videos can deduce the shapes, colors and movements that correspond to, say, a pedestrian.
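
For readers who want to see that supervised step in code, here is a minimal sketch, assuming PyTorch; the tiny network and the random stand-in data are mine, but the training loop is identical whether the labeled images are real or synthetic.

import torch
import torch.nn as nn

# A small convolutional network that learns which pixel patterns
# correspond to "pedestrian" vs. "no pedestrian" from labeled examples.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),  # two classes: pedestrian / not
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Random tensors stand in for a batch of 32x32 labeled training images;
# in practice these would come from real footage, synthetic renders, or both.
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 2, (8,))

for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # compare predictions to labels
    loss.backward()                        # compute gradients
    optimizer.step()                       # nudge the weights toward the labels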

What’s happening: Startups like Landing.ai, AI.Reverie, CVEDIA and ANYVERSE can create super-realistic scenes and objects for AI systems to learn from.

Synthetic data is useful for any AI system that interacts with the world — not just cars.

“We’re still in the early days,” says Evan Nisselson of LDV Capital, a venture firm that invests in visual technology.
