By 2020, the advanced driver-assistance systems (ADAS) market is estimated to be worth $60 billion, with a compound annual growth rate (CAGR) of almost 23%. ADAS is the new darling of the automotive industry and is driving much of the innovation in this vast ecosystem; we hear product announcements and news in this area almost every day. For example, Google recently launched its own car company, Uber is actively developing autonomous taxis, and GM sponsors the new Mcity facility in Michigan.
Source: Ian Riches of Strategy Analytics, "Vision-Based ADAS: Seeing the Way Forward," presented at the 2015 GPU Technology Conference.
Two contradictory trends
These and many other companies connected to the automotive industry now face challenges that are pushing their engineering teams toward the most advanced artificial-intelligence and deep-learning algorithms to date. These automotive engineering teams are applying technologies originally developed by NASA, DARPA, and military/defense agencies, technologies that are also finding their way into consumer electronics such as smartphones, wearables, and drones.
The challenges facing the automotive industry are unique because they combine two sometimes contradictory trends. On one hand, driving is moving toward autonomy, so cars must meet the most demanding safety and fault-coverage standards. On the other hand, cars remain cost-sensitive consumer products, so ADAS systems must be inexpensive add-ons, especially now that standards bodies such as Euro NCAP and NHTSA have incorporated them into their safety rating systems. This forces everyone in the industry (algorithm developers, system manufacturers, chip vendors, and processor designers) to think outside the box and work closely together across the ecosystem.
Take the relatively simple ADAS task of "lane departure warning" as an example. Lane departure is handled well under "normal" driving conditions, but when conditions are less than ideal, such as rain, fog, direct light reflection, or high-contrast scenes (such as exiting a tunnel), problems arise. It then becomes necessary to apply a variety of specialized filters and imaging algorithms to clean up the image, blend multiple frames (sometimes from different sensors), and more. The problem becomes even more challenging when road signs and lanes are not clearly marked, when lanes narrow or roads are under construction (so that lane markings are overlaid in different colors, or disappear), or when there are no lane markings at all. In addition, lanes are marked in many different ways, some depending on geographic location: continuous lines, dashed lines, double lines, and on some roads raised markers (Botts' dots) instead of painted markings. The system must then mimic the human brain (did I mention artificial intelligence?), gathering every available cue, including road markings, road edges, construction signals, the position of the vehicle ahead, the position of oncoming vehicles, traffic signs, and so on, to infer where the "lane" is.
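To make the "filter, clean up, and infer the lane" idea concrete, here is a deliberately minimal toy sketch, not a production ADAS algorithm: it assumes a single bright, near-vertical lane marking on a synthetic dark frame, finds its edges with a simple horizontal-gradient filter, and fits a straight line to the edge pixels by least squares. Real systems handle curvature, multiple markings, sensor fusion, and the adverse conditions described above.

```python
import numpy as np

def detect_lane_line(image, grad_thresh=50):
    """Toy lane-marking detector.
    1. Horizontal intensity gradient (highlights vertical edges).
    2. Threshold to keep only strong edge pixels.
    3. Least-squares fit of x = m*y + b through those pixels
       (lanes are near-vertical in a forward-camera image).
    Returns the fitted (m, b), or None if too few edge pixels."""
    grad = np.abs(np.diff(image.astype(float), axis=1))
    ys, xs = np.nonzero(grad > grad_thresh)
    if len(xs) < 2:
        return None
    m, b = np.polyfit(ys, xs, 1)
    return m, b

# Synthetic 100x100 "road" frame with one bright stripe that
# drifts right by one pixel every two rows (slope ~0.5 in x-per-y).
img = np.zeros((100, 100), dtype=np.uint8)
for y in range(100):
    x = 20 + y // 2
    img[y, x:x + 3] = 255   # 3-pixel-wide bright marking

m, b = detect_lane_line(img)
print(f"fitted lane line: x = {m:.2f}*y + {b:.1f}")
```

On this clean synthetic frame the fit recovers the stripe's slope almost exactly; the article's point is that rain, glare, and missing paint destroy exactly the strong-gradient assumption this sketch relies on.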
Utilizing energy-efficient processors
We are now trying to turn this lane departure warning system into a lane keeping system, one that actively adjusts the vehicle's steering so that it stays in its lane under autonomous driving conditions. The system's accuracy must at least match, and sometimes exceed, that of human vision and judgment. Meeting tight power and cost budgets while running the advanced algorithms needed to process large amounts of visual information is only possible with energy-efficient processors.
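The step from warning to keeping is the addition of a control loop: the detected lane position feeds a steering correction. The sketch below is an illustrative proportional controller; the gains, the interface, and the saturation limit are all assumptions for the example, not a real ADAS control law (production systems use far more sophisticated vehicle-dynamics models).

```python
def steering_correction(lateral_offset_m, heading_error_rad,
                        k_offset=0.4, k_heading=1.2, max_steer_rad=0.5):
    """Toy lane-keeping controller: proportional terms on the vehicle's
    lateral offset from lane centre and its heading error.
    Positive offset means the car sits right of centre, so we steer
    left (negative angle). Output is clamped to the actuator limit."""
    steer = -(k_offset * lateral_offset_m + k_heading * heading_error_rad)
    return max(-max_steer_rad, min(max_steer_rad, steer))

# Car drifted 0.3 m right and points 0.05 rad away from lane direction:
print(steering_correction(0.3, 0.05))  # small negative (leftward) command
```

Even this trivial loop makes the article's point: the controller is only as good as the lane estimate feeding it, which is why the perception problem dominates the engineering effort.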
To truly achieve autonomous driving, the car will need to continuously identify every object around it (cars, traffic signs, road markings, lanes, pedestrians, bumps, cyclists, traffic lights, debris on the road, and so on), understand the relationships between those objects (for example, the correlation between the behavior of the vehicle ahead and a traffic light), and make the right decision in an instant. Detecting (and keeping to) the driver's lane is really just the tip of the iceberg.
In the future, we can foresee that such safety-critical artificial-intelligence systems will also face ethical questions: in an unavoidable collision, should the vehicle hit an animal that has run onto the road, or swerve into pedestrians on the sidewalk? How should the system weigh the driver's safety against the safety of others on the road? Given how many factors the AI must judge, we should not expect fully self-driving cars to be available within the next 10 years; but advanced vision-based algorithms and the energy-efficient embedded processors that run them will certainly make significant progress.