Forming Standards for a Better Future Working Together

IEEE Computer Society Team
Published 07/25/2025

An interview with Yonghong Tian, recipient of the 2025 Hans Karlsson Standards Award

Yonghong Tian stands as a global authority in artificial intelligence and multimedia systems. Formerly Dean of the School of Electronics and Computer Engineering, he now serves as Vice-Dean of Peking University Shenzhen Graduate School and Dean of the new School of Science and Intelligence, and is a Boya Distinguished Professor at Peking University, China. Professor Tian has made groundbreaking contributions to brain-inspired neural networks, distributed machine learning, and AI for Science. His visionary leadership as Chair of the IEEE P2941 Working Group led to the creation of the IEEE 2941-2021 standard, a milestone that bridges the gap between diverse computing architectures and algorithm frameworks. In this interview, he reflects on his journey, shares his insights, and discusses the impact of his work on shaping the future of AI and international standardization.

Your research in neuromorphic vision and brain-inspired computation is pioneering. What inspired you to explore these areas?

Our motivation grew out of two complementary forces: urgent practical demand and the intellectual pull of the scientific frontier. A decade ago, while working on video coding and conventional computer vision, my team and I noticed a critical gap. Frame-based cameras that capture 30 images per second simply discard most of the information that flows through the real world. For applications such as autonomous driving, that lost detail can spell the difference between safety and disaster. We asked ourselves how biology solves this problem, because the human retina never works in discrete frames and the brain operates with remarkable speed and efficiency on milliwatts of power. Beginning in 2014, we examined retinal signal transduction in detail, and by 2016 we had built Vidar, our first retina-inspired visual processor. Vidar converts light intensity into one-bit spikes at 40,000 Hz, so it captures ultra-fast motion and copes naturally with extreme lighting conditions. However, traditional artificial neural networks cannot use spike streams directly, so we turned to spiking neural networks, the third generation of neural models, which communicate through discrete events just like biological neurons. To accelerate research in this area we released SpikingJelly, an open-source training framework that is now used worldwide. In essence, the eye taught us how to sense and the brain taught us how to compute. By uniting neuromorphic sensors with spiking processors we hope to bring machines closer to the elegance, speed and frugality of natural intelligence, and that mission continues to inspire us every day.
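The event-driven computation Professor Tian describes can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the basic unit of spiking neural networks. This is a generic textbook sketch, not the Vidar hardware or SpikingJelly's actual implementation; the parameter names and values are illustrative assumptions.

```python
def lif_neuron(input_current, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron over a sequence
    of input currents and return its binary spike train.

    Generic textbook model for illustration only; not the Vidar or
    SpikingJelly implementation.
    """
    v = v_reset
    spikes = []
    for x in input_current:
        # Leaky integration: the membrane potential decays toward the
        # reset level while being driven by the input current.
        v = v + (x - (v - v_reset)) / tau
        if v >= v_threshold:
            spikes.append(1)      # discrete event: the neuron "fires"
            v = v_reset           # hard reset after firing
        else:
            spikes.append(0)
    return spikes

# A weak constant drive never reaches threshold; a strong one fires
# on every other step, so information is carried by spike timing.
weak = lif_neuron([0.6] * 10)
strong = lif_neuron([1.5] * 10)
print(sum(weak), sum(strong))
```

Unlike a conventional artificial neuron, the output here is a stream of one-bit events rather than a continuous activation, which is why spike-camera output can feed such networks directly.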

As Dean of the School of Electronics and Computer Engineering at Peking University, how do you foster innovation and interdisciplinary research among faculty and students?

I actually concluded my term as Dean of the School of Electronics and Computer Engineering earlier this year and now serve as Vice-Dean of Peking University Shenzhen Graduate School and Dean of the new School of Science and Intelligence. The latter was established precisely to make cross-disciplinary work routine, because artificial intelligence is reshaping the way we pursue basic science, especially in materials and the life sciences. We encourage innovation on three fronts. First, our research grants follow a dual-PI model. Every “AI + Science” project is co-led by one principal investigator from a scientific domain and another from AI, who share resources and milestones. This structure ensures that deep domain problems and cutting-edge methods evolve together rather than in isolation. Second, our doctoral training adopts a dual-advisor system. Each student is guided by one mentor who defines the scientific question and a second who specialises in AI techniques that can answer it. For instance, a student working on AI for structural biology pairs a biologist who frames the problem with an AI researcher who designs the learning pipeline. The same approach drives our recent AI-materials collaborations, which have already produced joint papers in specialist journals. Third, we align incentives with this new workflow. Promotion guidelines, seed-fund competitions and annual awards now recognise shared publications, code releases and datasets that arise from genuine interdisciplinary partnership. Dedicated colloquia and matchmaking workshops help faculty spot complementary expertise and launch new teams quickly. By coupling shared leadership, shared mentorship and shared evaluation, we give both faculty and students clear reasons and concrete mechanisms to cross disciplinary boundaries and to turn AI advances into scientific breakthroughs.

With over 85 patents and 200 publications, how do you balance academic research with practical technological applications?

For me, scholarship and real-world impact form a single pipeline rather than two competing goals. Every project begins with a concrete need: an ability that industry, medicine, or national infrastructure still lacks. We then trace backward to the scientific unknowns that block progress. This need-first, frontier-second mindset keeps our work both relevant and ambitious. Once a project starts, we run it on two synchronized tracks. One team tackles the fundamental questions that yield journal papers, such as new learning-theory bounds or novel device physics. A second, often overlapping, team converts those advances into prototypes robust enough for field trials. As soon as a prototype shows a clear performance edge, we file a provisional patent to protect the core idea, and we publish the broader scientific framework so the community can build on it. This sequence (discover, patent, publish) has proved faster than the traditional publish-then-license path and keeps both our citation count and our technology-transfer metrics strong. We also measure success with a dual scorecard. Faculty and students earn recognition not only for high-impact papers but also for deployed systems and downstream revenue. Practical application therefore fuels basic research, and basic research feeds back into application.

Your involvement in developing retina-like visual sensing is groundbreaking for high-speed imaging. Where have these sensors been used, and what potential do you see for these technologies in further real-world applications?

Retina-inspired spike cameras already serve the high-speed imaging and measurement field. These cameras capture motion at sampling rates of up to 40,000 Hz, filling in the temporal gaps that conventional frame-based imagers leave behind. These high-speed imaging capabilities translate into concrete deployments. In autonomous driving the sensors detect sudden pedestrian steps and unexpected debris, providing extra reaction time that a thirty-frame-per-second camera would miss. Drone pilots use them to dodge obstacles while flying at high velocity. Rail engineers monitor bullet train wheels for early fault signatures, and assembly lines count small parts racing past with millimetre precision. Sports scientists record an athlete’s full sprint without motion blur, while biophysicists track the rapid wingbeat of birds or the motion of fast-moving cells under a microscope. Each of these tasks benefits directly from reliable information at up to 40,000 Hz. Looking ahead, we have built multi-camera spike arrays that merge their outputs for three-dimensional high-speed measurement. As the technology matures it will not replace conventional imagers so much as augment them, stepping in whenever motion is too fast or lighting too harsh. We believe that combination will open doors in robotics, intelligent manufacturing and scientific discovery, where both temporal acuity and resilience to light extremes are indispensable.

Standardization efforts, such as IEEE 2941-2021, are crucial for technological advancement. What challenges did you face in this process?

One difficulty was the furious pace of innovation in neural networks. New architectures and operators appear every few months, while a standard must stay stable enough for industry to build on it. We handled that tension by locking the core requirements and framework for a fixed period, then scheduling regular reviews that let us publish additional profiles when a major breakthrough proves mature and widely adopted. In this way the document stays current without forcing companies to rewrite their toolchains every quarter. A second challenge was choosing which technical proposals deserved to be codified. Researchers and companies brought many competing ideas, so we opened a public call for contributions, released a common benchmark suite and evaluation protocol, and required every submission to run on the same tests. Proposals that demonstrated clear, reproducible gains earned their place in the specification. This transparent “best wins” process kept the discussion objective and helped the community converge on the most robust solution. Other issues such as managing intellectual property surfaced as well, but those are shared by nearly every standards effort. The two hurdles above were unique to the fast-moving world of AI and defined much of our day-to-day work on IEEE 2941-2021.

How does your role at Pengcheng Laboratory complement your academic endeavors at Peking University?

Pengcheng Laboratory and Peking University give me two very different but perfectly matched platforms. At Peking University I work within a principal investigator model where small teams push the scientific frontier, often through bold algorithmic ideas that spring from students’ creativity. Pengcheng, by contrast, is a public-interest institute with national-scale computing and the organizational muscle to rally dozens of engineers around one grand project. Our field demands both kinds of effort. Fundamental questions about neural computation begin in the university lab, yet turning those insights into a full-scale hardware system or a city-level application calls for the coordinated resources that Pengcheng can provide. When we need petaflop clusters or specialized fabrication support, Pengcheng makes them available. When Pengcheng launches a large system build, the latest algorithms and fresh perspectives come from our campus group. This arrangement is similar to joint university–national-lab roles in the United States: wide infrastructure on one side, agile discovery on the other. By combining them we can explore new science and deliver working technologies faster than either institution could manage alone.

In the rapidly evolving field of AI, how do you ensure that your research remains at the cutting edge?

I keep our work at the forefront by following two principles that have guided me from the beginning. First, I focus on bottlenecks that hold back today’s dominant AI systems. One clear example is energy consumption: state-of-the-art language or vision models can draw kilowatts, whereas the human brain, with far more neurons and synapses, runs on about twenty watts. That contrast convinced us to study biologically inspired networks so we can design algorithms and hardware that achieve comparable capability on a tiny power budget. Second, I deliberately explore alternative technical paths rather than incremental improvements to existing ones. Spiking neural networks and brain-like computation form a separate lineage from conventional artificial neural networks. The manufacturing processes and software stacks are still immature, so industry moves cautiously, but asking whether this path can work is precisely the kind of question academia is meant to tackle. By addressing the critical problems and testing unconventional ideas, we help define the next wave of AI instead of merely keeping pace with the current one.

What advice would you offer to young researchers aspiring to contribute to neuromorphic computing and AI?

Begin with a genuine problem that matters, not with whatever happens to be fashionable. Examine your own resources, skills, and curiosity, then look for a technical bottleneck that still resists a clear solution. If you can push that single obstacle aside, you will make the whole field advance. Once you have identified such a challenge, stay with it; depth over breadth is what turns a student project into world-class work. When choosing a technical path, aim for questions whose answers will remain valuable for years. Incremental tweaks to today’s commercial systems are usually solved faster by industry, so concentrate on long-horizon scientific issues such as energy efficiency, real-time learning, or brain-scale integration. Pursue the route that excites you, because sustained passion is your best fuel. Do not let publication become the only yardstick for achievement. A widely adopted open-source tool, a hardware prototype that demonstrates a new principle, or a contribution to an international standard can all be as influential as a paper in a top journal. Try to link your fundamental insights to potential applications early on; the balance is hard, but it will keep your work both rigorous and relevant. Finally, build your own signature. Once you commit to a direction, persist until your name is associated with a distinctive contribution. Collaboration will help, but the clarity of your personal vision is what will guide every partnership and every experiment you undertake.

Yonghong Tian is the Vice-Dean of Peking University Shenzhen Graduate School and Dean of the new School of Science and Intelligence, as well as a Boya Distinguished Professor with the School of Computer Science, Peking University.