Responsible Research and Innovation
The development and deployment of emerging technologies is shrouded in uncertainty at the technical, ethical, and societal levels. Understanding where we are now, and the possible scenarios these technologies can bring about, is essential to shaping their transition from proof-of-concept systems to adopted products.
Responsible Research and Innovation (RRI) can be a useful guideline for developers and entrepreneurs working on emerging technologies. A first principle to consider is the moral justification and proportionality of the proposed solutions. A careful analysis of the known risks and benefits should guide decisions on the development and deployment of technological solutions. Moreover, it is advisable to take a proactive stance on the Ethical, Legal and Social Implications (ELSI) of these technologies. As some of their risks can only be discovered once they are deployed to customers, developers should establish mechanisms for continuous monitoring and mitigation of their effects. Importantly, consumers should consider these products and services as being in continuous development, carrying underlying risks both known and unknown. Consequently, legal frameworks should provide efficient mechanisms for user protection, as well as clearly define liability and accountability in case unknown risks materialize.
I am engaged in several initiatives aimed at promoting responsible research and innovation approaches for emerging technologies. First and foremost, I head the Swiss office of the European CLAIRE initiative on human-centred Artificial Intelligence. I also work with the IEEE Standards Association by chairing the group on Neurotechnologies for Brain-Machine Interface, which is aimed at identifying gaps and priorities for standardization in this field. In addition, I contributed to the OECD working paper on Responsible Innovation for Neurotechnology Enterprises, released in autumn 2019.
Documents
- Standards Roadmap: Neurotechnologies for Brain-Machine Interfacing, IEEE Standards Association, February 2020
- Responsible Innovation for Neurotechnology Enterprises, OECD working paper, autumn 2019
- On the Need of Standards for Brain-Machine Interface Systems, IEEE Brain eNewsletter, 2017
Related Talks
- Nov 10 2020: DEFTECH DAYS - Human-machine interface & Interaction, Thun, Switzerland. Invited speaker
- Sept 7-9 2020: Talk “Overcoming Barriers for Safe and Efficient Applications of Brain-Machine Interfacing (BMI)/Brain Computer Interface (BCI) for Clinical and Consumer Applications through Standardization”. 1st IEEE International Conference on Human-Machine Systems, Rome, Italy. Invited speaker
- Sep 4 2020: SIDO event, Lyon, France. Plenary speaker
- June 2020: Talk at the workshop Global Perspectives on Responsible Artificial Intelligence, Freiburg, Germany
- March 28 2020: Panel “At the Interface of AI, Neuroscience and Policy”, Annual Conference of the Marie Curie Alumni Association, Zagreb, Croatia. Invited panelist
- January 28 2020: Geneva Centre for Security Policy (GCSP), Geneva, Switzerland. Invited lecture in the Leadership in International Security course
- Nov 26 2019: Symposium “Closing the Regulatory Gap for Consumer Neurotechnology”, Brocher Foundation, Geneva, Switzerland
- Oct 23 2019: Congreso Tecnología y Turismo para la Diversidad, Málaga, Spain. Press release (in Spanish)
- Sept 16 2019: Workshop on Standards for Neurotechnologies and Brain-Machine Interfacing, Graz BCI Conference, Graz, Austria
- June 13 2019: Geneva Centre for Security Policy Alumni Connections & Conversations, Geneva, Switzerland. Keynote talk
- May 10 2019: International colloquium “Le Cyborg face au handicap”, Maison d’Ailleurs, Yverdon-les-Bains, Switzerland
- March 19 2019: Open Geneva Festival, Cyber-Nexus Conference, Geneva, Switzerland. Invited talk: Brain-machine interfaces: state of the art and societal implications
- November 28 2018: Cyborg Days, Zurich, Switzerland
Sponsors and partners
[Sponsor and partner logos]