We started in applied computer science research and evolved into a team obsessed with one thing: turning state-of-the-art AI research results into real-world applications on custom hardware.
We believe that the future of AI inference isn't about faster software — it's about eliminating the software layer entirely. By synthesizing neural networks directly into FPGA hardware, we unlock performance characteristics that generic accelerators simply cannot achieve.
Our foundation is in applied computer science and embedded systems, where we studied the design of neural networks, arithmetic, and AI training specialized for FPGA fabrics. This approach yielded groundbreaking results in research settings, demonstrating inference latencies orders of magnitude lower than traditional methods and surpassing the state of the art. Now, we're taking the next step: building a team and a company to turn these scientific breakthroughs into real-world products that can transform industries.
Our work bridges the gap between machine learning and hardware design through cutting-edge research in FPGA-friendly computing.
Optimizing arithmetic for FPGA implementations, because every bit of accuracy counts.
Specialized fixed-point and quantized arithmetic techniques optimized for FPGA resources, maintaining model accuracy while maximizing throughput.
Co-design approaches that train neural networks with hardware constraints in mind, creating models that map efficiently to FPGA silicon.
End-to-end systems for deterministic, low-latency inference on streaming data with guaranteed cycle-accurate timing.
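As a toy illustration of the fixed-point techniques described above, the sketch below quantizes floating-point weights into a signed fixed-point format of the kind that maps naturally onto FPGA resources. The bit widths, function names, and symmetric rounding scheme are illustrative assumptions, not UFAIRA's actual toolchain.

```python
import numpy as np

def quantize_fixed_point(x, total_bits=8, frac_bits=6):
    """Map floats to signed fixed-point integers with `frac_bits`
    fractional bits (a toy sketch; real designs tune widths per layer)."""
    scale = 2 ** frac_bits
    qmin = -(2 ** (total_bits - 1))       # most negative representable code
    qmax = 2 ** (total_bits - 1) - 1      # most positive representable code
    return np.clip(np.round(x * scale), qmin, qmax).astype(np.int32)

def dequantize(q, frac_bits=6):
    """Recover the approximate float value from fixed-point codes."""
    return q.astype(np.float64) / (2 ** frac_bits)

weights = np.array([0.731, -0.252, 0.004, -0.998])
q = quantize_fixed_point(weights)
restored = dequantize(q)
# For values in range, rounding error is bounded by half an LSB (2**-7 here).
print(np.max(np.abs(weights - restored)))
```

On hardware, the integer codes feed directly into DSP slices or LUT-based multipliers, so the accuracy/resource trade-off comes down to choosing the smallest bit widths that keep the model's output within tolerance.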
Started exploring the potential of custom FPGA synthesis for neural network inference at the university research level.
Demonstrated sub-microsecond inference latencies on custom models, validating the core approach, publishing the results, and attracting collaborators.
Launched UFAIRA with a full toolkit for custom FPGA synthesis. Engaging with industry partners across finance, robotics, and telecom.
A multidisciplinary team bridging machine learning, hardware design, and real-world engineering.
PhD student at Fulda University of Applied Sciences, Department of Applied Computer Science, specializing in neural network co-design for FPGA implementation. Expertise in FPGA-friendly arithmetic and hardware-aware training methodologies.
He is currently a Professor of Embedded Systems at Fulda University of Applied Sciences, Germany. His research interests include application-specific arithmetic and its optimization, as well as architectures for deep learning, with particular emphasis on reconfigurable systems.
We're looking for talented engineers and researchers. Reach out to explore collaboration opportunities.