Computing @ The Speed of Light

I have to admit that I’m starting to feel rather eager and excited. These are, of course, my usual states, so it may be difficult for a casual observer to tell if there’s any difference. The reason for my current exhilaration and enthusiasm is that I spent much of this past weekend sketching out the presentations I will be giving in Trondheim, Norway, in early September.

Did you know that Norway’s national bird is the white-throated dipper? The reason I’m dropping this bit of trivia into our conversation is that Norway’s capital, Oslo, sits on the country’s southern coast, and Trondheim is about 390 km (about 242 miles) north of Oslo as the white-throated dipper flies.

Located on the southern shore of the Trondheim Fjord, the city was founded as a trading post in 997 CE. A beautiful metropolis with a rich history, Trondheim served as the capital of Norway during the Viking Age until 1217. Today, Trondheim is Norway’s third most populous municipality. The city is, among other things, home to the Norwegian University of Science and Technology (NTNU), where I will be giving a guest lecture to a bunch of MSc students in electronic and electrical engineering.

My lecture at NTNU will take place on Tuesday, September 6. The next day, I’ll be giving the keynote presentation at the FPGA Forum, which is the premier annual event for the Norwegian FPGA community. This is where FPGA designers, project managers, technical managers, researchers, senior engineers, and the major suppliers gather for a two-day deep dive into the latest and greatest happenings in the FPGA space (where no one can hear you scream).

It has been 10 years since I last presented the keynote at the FPGA Forum. That august event took place in February 2012, which is not the warmest month on the Norwegian calendar, I can tell you. On the plus side, after my presentation at the forum, I was treated to a magical train journey through the snowy landscape from Trondheim to Oslo. While I was there, I gave a guest lecture at the University of Oslo, I managed to visit the Kon-Tiki Museum, which contains the balsa raft that the Norwegian explorer and writer Thor Heyerdahl and his friends used to cross the Pacific Ocean, and I found myself wandering through Vigeland Park at midnight, where, under the light of the moon, I was surrounded by hundreds of snow- and ice-covered statues reminiscent of a scene from The Lion, the Witch and the Wardrobe by C.S. Lewis.

By comparison, at this year’s FPGA Forum in September, I can look forward to a heady 13°C accompanied by wind and rain (much like the holidays in England when I was a kid), which means I won’t have to devote a lot of space in my suitcase to sunscreen.

I am very much looking forward to both presentations. In the case of the university lecture, which will span two 45-minute sessions, I have so many potential topics bouncing around my poor old noggin that I think it’s going to be hard to squeeze everything in.

When it comes to the FPGA Forum itself, I’ve been thinking about and mulling over just how much things have changed in the past 10 years. For example, although they were lurking in the wings, as it were, things like artificial intelligence (AI), machine learning (ML), and deep learning (DL) were not on everyone’s lips. Come to think of it, neither were virtual reality (VR), augmented reality (AR), diminished reality (DR), augmented virtuality (AV), mixed reality (MR), or hyper reality (HR).

In addition, although it’s been only 10 years, there were some interesting FPGA companies back then that are sadly no longer with us, such as Tabula, which came third on the Wall Street Journal’s annual “Next Big Thing” list in 2012 but closed its doors in 2015. On the other hand, some new players have since entered the scene, such as Efinix, with its Trion FPGAs, and Renesas, with its ForgeFPGA family.

In the case of my keynote, I have no doubt that FPGAs will find their way into the conversation, but I don’t plan on spending too much time mulling over their nitty-gritty details. Instead, we’ll look at some of the exciting new applications that can take advantage of the opportunities offered by FPGA technologies. I also plan to talk about some of the amazing new technologies that could drastically change the way we do things in the future.

For example, one company I plan to introduce is CogniFiber, whose claim to fame (well, tagline) is “Computing @ The Speed of Light.” I recently had a very interesting conversation with Dr. Eyal Cohen, CEO and Co-Founder of CogniFiber.

You have to wrap your head around this a bit (to be honest, I’m still wrapping mine around it), but I’ll explain it as best I can. Let’s start with the following image. My knee-jerk reaction when someone says “optical fiber” is to think of a fiber-optic cable containing multiple optical fibers, but I understand that, in this case, we are actually talking about a single optical fiber containing hundreds or thousands of cores. In traditional applications, we try to keep the crosstalk between cores as low as possible. For example, in the top part of the image, our input is an optical image, each of whose pixels is sent through its own core, and our output is the same image.

By comparison, what CogniFiber does is maximize the crosstalk between cores and use this crosstalk to perform AI/ML-type computations. The idea is that you can feed an image into one end of the fiber and receive computed data at the other end (“is there a cat in this image?”). So, essentially, what we have here is a neural network implemented in a single optical fiber composed of thousands of cores.
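For what it’s worth, here’s how I’ve been picturing it in software terms. Everything below (the coupling matrix, the square-law detection, the little readout layer) is my own toy model rather than anything CogniFiber has published, but it captures the gist: the crosstalk acts like one enormous fixed matrix multiply applied to the input pixels, and the detectors at the far end supply the nonlinearity.

```python
import numpy as np

# Conceptual sketch only -- my own toy model, not CogniFiber's actual design.
# Idea: the crosstalk between the fiber's cores acts like a big, fixed
# matrix multiply applied to the light injected into each core.

rng = np.random.default_rng(42)

n_cores = 1024                        # one core per input pixel (my assumption)
pixels = rng.random(n_cores)          # flattened input image, values in 0..1

# Core-to-core coupling: in a conventional fiber this matrix would be close
# to the identity (minimal crosstalk); here it is deliberately dense.
coupling = rng.normal(scale=1.0 / np.sqrt(n_cores), size=(n_cores, n_cores))

# Light propagates through the fiber: the inputs get mixed by the coupling...
mixed = coupling @ pixels

# ...and the photodetectors at the far end respond nonlinearly
# (intensity detection is a square-law process).
detected = mixed ** 2

# A small electronic readout layer turns the detected pattern into an answer.
readout = rng.normal(size=n_cores)    # these weights would be trained in practice
cat_score = 1.0 / (1.0 + np.exp(-(readout @ detected)))
print(f"P(cat in image) = {cat_score:.3f}")   # "is there a cat in this image?"
```

In the real device, of course, the “weights” live in the physics of the glass rather than in a NumPy array.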

Converting optical fibers into processors (Image source: CogniFiber)

I’m really still trying to wrap my brain around this. It all seemed so clear when Dr. Cohen explained it to me, but it all seems so fuzzy now (which makes me think of Fuzzy Logic, but I refuse to be distracted).

The main things I remember are that it’s possible to establish the parameters and coefficients of these neural networks and to optimize those parameters during training in much the same way as with FPGA/GPU implementations, except, of course, that everything is done with light. In addition, depending on the diameters and proximity of the cores, it’s possible to perform computations with fibers whose lengths (their Z dimension) range from 1 meter down to a few microns.
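To make that first point a little more concrete, here’s a minimal training sketch. It is entirely my own guess at what “optimizing the parameters in much the same way as with FPGA/GPU implementations” could look like, with a plain software stand-in for the fiber and the core-to-core coupling coefficients treated as the trainable weights of a single layer; nothing here comes from CogniFiber itself.

```python
import numpy as np

# Toy training loop -- my own assumption of how such coupling coefficients
# might be optimized with ordinary gradient descent, exactly as one would
# train a conventional layer on a GPU or FPGA.

rng = np.random.default_rng(0)
n_in, n_out, n_samples = 64, 10, 500

X = rng.random((n_samples, n_in))              # toy "images"
labels = (X @ rng.normal(size=(n_in, n_out))).argmax(axis=1)
Y = np.eye(n_out)[labels]                      # one-hot targets

W = rng.normal(scale=0.1, size=(n_in, n_out))  # coupling coefficients to learn
lr = 0.5

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for step in range(200):
    probs = softmax(X @ W)                     # forward pass ("light through the fiber")
    grad = X.T @ (probs - Y) / n_samples       # cross-entropy gradient w.r.t. W
    W -= lr * grad                             # nudge the coefficients

accuracy = (softmax(X @ W).argmax(axis=1) == labels).mean()
print(f"training accuracy on the toy task: {accuracy:.2f}")
```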

Think about how an artificial neural network (ANN) used in a deep learning application typically works. Each layer in the network requires a gigantic matrix multiplication involving countless parameters. The results generated by each layer are fed as inputs to the next layer, which means the intermediate results have to be stored temporarily. Now consider the CogniFiber equivalent, as illustrated below.

On-the-fly computing without reading/writing to memory (Image source: CogniFiber)
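To put the contrast in concrete terms, here’s a minimal sketch of my own (nothing CogniFiber-specific) comparing the conventional layer-by-layer flow, where every intermediate result is parked in memory, with the on-the-fly style of computation illustrated above.

```python
import numpy as np

# My own illustration of the contrast in the figure above -- not CogniFiber
# code. In a conventional GPU/FPGA flow, each layer's output is written out
# to memory and read back in as the next layer's input.

rng = np.random.default_rng(1)
layers = [rng.normal(scale=0.1, size=(256, 256)) for _ in range(4)]
x = rng.random(256)

def conventional_forward(x, layers):
    intermediates = []                    # stand-in for buffers in DRAM/SRAM
    for W in layers:
        x = np.maximum(W @ x, 0.0)        # matrix multiply + ReLU
        intermediates.append(x.copy())    # every layer's result is parked in memory...
    return x, intermediates               # ...and shuttled back for the next layer

def in_fiber_forward(x, layers):
    # Conceptually, the light just keeps propagating: each "layer" is applied
    # on the fly, with no intermediate results written to or read from memory.
    for W in layers:
        x = np.maximum(W @ x, 0.0)
    return x

out_a, stored = conventional_forward(x, layers)
out_b = in_fiber_forward(x, layers)
print("same result:", np.allclose(out_a, out_b),
      "| intermediate buffers avoided:", len(stored))
```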

According to CogniFiber, this in-fiber processing delivers a 100-fold boost in computing power while consuming a fraction of the power of a traditional semiconductor-based solution.

I have to admit that I am quite excited by all of this, but we have to remind ourselves that we are still in the very early days of this technology. The current roadmap is for an Alpha Prototype launch in Q3 2022, Beta Prototype Acceleration in Q4 2022, Beta Deployment in Q2 2023 and Production in Q3 2023.

The thing is, right now I see all kinds of new optical technologies appearing on the scene. In fact, the folks at CogniFiber themselves have announced the development of a glass-based photonic chip that they say will bring their technology one step closer to revolutionizing edge computing. When I think about everything that has changed in the last 10 years, I’m excited to see what the next 10 years will bring. How about you? Do you have any thoughts you’d care to share on any of this?
