Computer vision is rapidly moving beyond academic research and factory automation. With the right platforms and tools, it opens up endless possibilities in wearable applications, augmented reality, surveillance, ambient-assisted living, and more.
Vision, our richest sensor, allows mining big data from reality. Although image sensors account for only a small fraction of all sensors deployed worldwide, the data they generate dwarfs that of all other sensor types combined. This comes at a cost: vision is arguably the most demanding modality in terms of power consumption and required processing power.
Our objective in this project is to build a core vision platform optimized for power, size, cost, and programmability, able to work standalone as well as embedded into all types of artefacts. The envisioned open hardware is combined with carefully designed APIs that maximize inferred information per milliwatt and adapt the quality of inferred results to each particular application. This will not only mean more hours of continuous operation; it will also enable novel applications and services beyond what current vision systems can do, as today's systems are either personal/mobile or "always-on", but not both at the same time.