Terrible traffic, prayer-inducing merges, dangerous road conditions, and drivers who hazard sudden, scream-worthy maneuvers all add to Moscow's commuting woes. Sadly, this is what 98 percent of the world's roads are like, and it is why one Russian company, Cognitive Technologies Group, may come out ahead in the race to build the self-driving car.
Olga Uskova, the group's president and founder, is skeptical of Silicon Valley's sunny projections of when autonomous vehicles will go mainstream. The reason? As she told The Guardian, most places have too many variables to account for. In Moscow, for instance, "The environment is ever-changing: the snow has covered traffic signs; it's raining on your windshield, the sun is blocking you. Our people train using these kinds of data." Note that the best-known autonomous prototypes, the Financial Times recently reported, have trouble navigating through snow. Uskova insists that her model doesn't have that problem.
Cognitive Technologies began in 1993; two of its founders had earlier developed Kaissa, the program that won the first world computer chess championship. The company has since sold software to the likes of Intel and Yandex. In 2014, it launched its autonomous vehicle program, Cognitive Pilot (C-Pilot), Russia's first and largest player in the nascent autonomous vehicle market.
A Nissan X-Trail equipped with a C-Pilot system. Credit: Cognitive Technologies.
Their secret isn't specialized software, like Tesla's Autopilot, or hardware, like Mobileye's patented microchip. Instead, Uskova and her team taught an A.I. program the intricacies of driving in Moscow by exposing it to 100,000 dashcam videos and other footage collected by Moscow State University.
Using the footage, Uskova and her team trained a neural network that, they say, allows their vehicle to better maneuver the mean streets of Moscow. And because it runs on run-of-the-mill computer hardware, their system is less expensive than competitors' versions and easier to upgrade.
Cognitive Technologies hopes to put out a Level 4 autonomous vehicle by the end of 2019. That's not all: the company has partnered with Russian truck maker Kamaz to develop a self-driving tractor trailer by 2020, and Uskova and colleagues plan to have an autonomous combine harvester farm ready by 2024.
And their car prototype? So far, they've rigged out a Nissan X-Trail with a C-Pilot system. It can recognize three dozen road signs with nearly 100 percent accuracy, as well as stop, accelerate, and heed traffic lights. Now the company is setting up two US offices, reaching out to English-speaking media, and seeking additional funding. It also demoed C-Pilot at the latest Consumer Electronics Show (CES), held every January in Las Vegas. One snag: visa issues, a product of heightened tensions between the US and Russia, have made it difficult for Cognitive Technologies to gain a solid foothold in the US.
Credit: Cognitive Technologies.
So how does their system work? Recently, I asked Uskova via email. First, high-resolution cameras, imaging radar, and a bevy of onboard sensors collect data, which is fed into one of four modules: the observer, which monitors the car's surroundings; the geographer, which pinpoints the vehicle's location; the navigator, which finds the quickest route; and the machinist, which handles the physical driving of the vehicle. All of this raw data is processed and blended together by a deep-learning neural network running on an energy-efficient onboard processor.
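The division of labor among the four modules can be sketched in code. This is a minimal illustration, not Cognitive Technologies' implementation: the module names come from Uskova's description, while every data structure, threshold, and stub below is an assumption for demonstration purposes.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    camera: list  # raw camera pixels (placeholder)
    radar: list   # radar range returns in meters (placeholder)

def observer(frame):
    """Monitor the car's surroundings: flag nearby radar returns as obstacles.
    The 50 m cutoff is an arbitrary illustrative threshold."""
    return {"obstacles": [r for r in frame.radar if r < 50]}

def geographer(frame):
    """Pinpoint the vehicle's location (stubbed as fixed coordinates)."""
    return {"position": (55.7558, 37.6173)}  # central Moscow, for illustration

def navigator(position, destination):
    """Find a route from position to destination (stubbed as a straight line)."""
    return {"route": [position, destination]}

def machinist(scene, route):
    """Translate the perceived scene and route into driving commands."""
    brake = len(scene["obstacles"]) > 0
    return {"throttle": 0.0 if brake else 0.5, "brake": brake}

# One tick of the control loop: sensors in, driving command out.
frame = SensorFrame(camera=[], radar=[12.0, 80.0])
scene = observer(frame)
where = geographer(frame)
plan = navigator(where["position"], (55.75, 37.62))
command = machinist(scene, plan["route"])
print(command)  # the 12 m return triggers braking
```

In a real system each module would run continuously and feed the shared neural network Uskova describes; the sketch only shows how the four responsibilities hand data to one another.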
Similar to a biological brain, the network absorbs and processes the information and then decides how to proceed. Most self-driving cars use LIDAR (Light Detection and Ranging), which works much like radar but uses beams of infrared light instead of radio waves; in other words, it relies on invisible lasers to sense the environment. I asked what type of system C-Pilot uses.
“Our main sensors are radar and cameras, not LIDAR,” Uskova said. “We believe that radar is the future of autonomous driving, as it is the most appropriate sensor for this technology. Radar is significantly more reliable in bad weather (snow, rain, fog). Our radar constructs a dynamic 3D projection at a distance of 150-200 meters (492-656 ft.). When the weather gets worse—the range falls to just 100 m (328 ft.).” Radar is also more cost-effective.
According to Uskova, the autonomous vehicle market is just beginning to firm up, with major players taking positions in certain niches. Cognitive Technologies believes its advantage lies in sensor technology. "The human eye has a much higher resolution in its central part. When we try to zoom in and look closer at something—we use foveal vision. The same method is used in C-Pilot's Virtual Tunnel tech. Its algorithm tracks all movements and focuses attention on the main risk zones," she wrote.
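The foveal idea Uskova describes can be illustrated with a toy example: process the whole frame at coarse resolution, but keep full resolution inside a small window centered on a risk zone. The function names, window size, and frame format below are assumptions for illustration, not the actual Virtual Tunnel algorithm.

```python
def downsample(frame, factor):
    """Coarse 'peripheral vision': keep every `factor`-th pixel in each dimension."""
    return [row[::factor] for row in frame[::factor]]

def fovea(frame, center, radius):
    """Sharp 'foveal vision': a full-resolution crop around the risk zone."""
    r, c = center
    return [row[max(0, c - radius):c + radius]
            for row in frame[max(0, r - radius):r + radius]]

# A synthetic 100x100 frame; each pixel value encodes intensity.
frame = [[(i + j) % 256 for j in range(100)] for i in range(100)]

periphery = downsample(frame, factor=10)          # 10x10 coarse overview
focus = fovea(frame, center=(50, 50), radius=8)   # 16x16 sharp crop

print(len(periphery), len(periphery[0]))  # 10 10
print(len(focus), len(focus[0]))          # 16 16
```

The payoff is computational: a detector only has to run at full resolution over the small foveal crop, while the cheap downsampled view is enough to decide where the next crop should go.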
President of Cognitive Technologies Olga Uskova. Credit: Getty Images.
Uskova also said:
We also believe that within the next 10 years, as processor capacities grow, the resolution of sensors will also increase significantly. Now the cameras for autonomous vehicles have a resolution of 2-5 megapixels, and the resolution of the human eye can be estimated at 100 megapixels. And for better detection of small objects and animals, the resolution of the onboard cameras should grow. Now, our system can recognize the average size animal at a distance of up to 30 meters (98 ft.).
I asked what makes her system different from those being developed by Uber, Waymo (Google), other Silicon Valley companies, and the big automakers, Ford in particular. To date, there are 27 companies working on autonomous vehicles. “At the moment, we are the best in the world in the field of road scene perception and detection,” she said. “We have 19 unique patents and inventions. 22 million dollars have been invested in the product and we have real industrial practice in the most severe weather conditions.”
To witness the C-Pilot system in action, click here.