The TULIPP (Towards Ubiquitous Low-power Image Processing Platforms) project, an initiative targeting the development of high-performance, energy-efficient embedded systems for the growing range of increasingly complex image processing applications, has been announced. The project is funded with nearly €4 million from Horizon 2020, the European Union’s biggest research and innovation programme to date.
TULIPP will focus on developing a reference platform for designers of vision-based systems. The platform defines a set of guidelines for selecting relevant combinations of computing and communication resources to be instantiated in the platform, minimising energy consumption while reducing development costs and time-to-market.
These guidelines will tackle the complex design issues surrounding the next generation of embedded image processing applications emerging across a range of industry sectors.
From an applications perspective, these complexities relate to the need for guaranteed, high performance computing power coupled with greater power efficiency within the context of embedded design requirements.
In terms of the available target silicon, software designers must be able to deal easily with the parallel programming issues presented by multicore devices, as well as the heterogeneity of different programming models and APIs.
The guidelines for the reference platform define what a piece of hardware or software must look like in order to be TULIPP-compliant. As part of the project, TULIPP will use these guidelines to develop an instance of the TULIPP reference platform comprising a scalable low-power board designed to meet typical embedded systems requirements of size, weight and power (SWaP), a low-power operating system and image processing libraries, and an energy-aware tool chain.
In addition, TULIPP will develop three use-case demonstrators as proof-of-concept and validation of the reference platform. These use cases cover different industrial domains with emerging complex image processing requirements: a medical imaging surgical X-ray system designed to reduce radiation doses by 75%; a smart automotive embedded vision system for advanced driver assistance systems (ADAS) that, in addition to low-level image processing, intelligently interprets what is in the images to deliver a safer driving experience; and an embedded image processing system to create smart drones and Unmanned Aerial Vehicles (UAVs) for the intelligent search and rescue of survivors at disaster incidents.
By the end of the project in 2018, TULIPP expects its work to improve the peak performance per watt of image processing applications by 4x and average performance per watt by 10x. Beyond the official completion of the project, these gains are expected to reach 100x and 200x respectively by 2023.
The TULIPP consortium members are drawn from both industry and academia. Thales (France) serves as project lead and co-ordinator, with Efficient Innovation SAS (France), Fraunhofer IOSB (Germany), Hipperos (Belgium), Norges Teknisk-Naturvitenskapelige Universitet (Norway), Ruhr-Universität Bochum (Germany), Sundance Multiprocessor Technology (United Kingdom), and Synective Labs (Sweden) providing the additional interdisciplinary expertise required to make the project a success.
TULIPP will work closely with standards organisations to propose the formal, industry-wide adoption of new standards derived from its reference platform. TULIPP is also seeking to establish an Advisory Board of vision-based systems stakeholders to review the work of the project on an ongoing basis and to help extend the reach of image processing applications into new industry sectors.
“Image processing applications stretch across an increasingly broad range of industrial domains and are reaching a higher level of complexity than ever before,” said Philippe Millet of Thales and TULIPP’s Project Co-ordinator. “The TULIPP reference platform will give rise to significant advances in system integration, processing innovation and idle power management to cope with the challenges this presents in increasingly complex vision-based systems.”