Back in August of last year we announced a collaboration with one of the top technical universities in Europe, ETH Zurich in Switzerland. Their team has been building a unique open-source SoC platform called “PULP”, short for “Parallel Ultra-Low Power”. The platform combines multiple RISC-V cores, each with its own memory subsystem, with other embedded IP to increase compute bandwidth while dramatically reducing overall power consumption.
They wanted to integrate our embedded FPGA (eFPGA) technology to allow users to make intelligent software/hardware trade-offs and optimize the platform’s bandwidth/power curve for various applications. (In fact, they were the first to license our core for the GlobalFoundries 22FDX® process node.)
A specific example that we discussed in the press release announcing the collaboration was using the embedded FPGA logic to accelerate feature extraction for AI-based functions. By offloading those functions from the RISC-V processors on the platform to one or more hardware-based eFPGA blocks, users can speed up their designs while actually reducing the amount of power they consume.
One significant benefit of pairing state-of-the-art software-based (RISC-V) and hardware-based (eFPGA) implementations for AI applications is that both sets of functions are programmable. This dramatically increases the platform’s flexibility, allowing it to adopt the latest algorithms while remaining extremely power efficient.
Yesterday, the PULP team released “Arnold”, the first test chip from this collaboration. Arnold gives us a single, reconfigurable platform to showcase the benefits of having AI feature extraction available all the way at the edge of the IoT network, and it will ultimately lead to low-cost, high-performance, low-power solutions for a wide variety of edge IoT applications.
We are proud to be part of the Arnold development effort with ETH. They share our vision that the right software/hardware architectural partitioning can drive the creation of high-bandwidth yet extremely power-efficient platforms for endpoint designs. With Arnold available, development teams can easily evaluate the performance and power benefits of using eFPGA technology for hardware co-processing in AI and IoT applications at the edge, and we very much look forward to co-developing new applications with ETH on the PULP platform.
Have a question? Please contact firstname.lastname@example.org
Want to know how PULP and eFPGAs can be used for AI at the edge? See Tim’s presentation at the RISC-V Summit.