In March the European Artificial Intelligence & Society Fund published an interesting report, ‘How public money is shaping the future direction of AI: An analysis of the EU’s investment in AI development’.
The summary notes that Europe has invested substantial amounts through the Digital Decade programme, offering an opportunity for an alternative to the US and Chinese digital models. During their research, however, the researchers encountered problems (emphasis mine):
In this research we look at the record of previous EU research programmes to understand how funding flows in practice and whether the goals set out by the EU can be delivered.
Our efforts, though, have been hampered by one of the main challenges of the EU’s funding system, namely the lack of accessible data on its funding flows and lack of comprehensive reporting available on the FP’s results and impact. Through an arduous process involving the scraping of data from numerous sources, we have now collected a dataset that allows for analysis of past programmes. We have made this publicly available and invite other researchers to interrogate it further.
Unfortunately, we find this opacity is characteristic throughout the funding ecosystem, from the design of programmes, to the allocation of funds, to the evaluation of outcomes. This both hinders the capacity of the EU to realise its stated objectives and undermines the credibility of its commitments as they cannot be effectively scrutinised.
These shortcomings are true across the FP system. But there are additional gaps specific to AI that must be addressed to meet the EU’s ambitions to foster trustworthy innovation. We find a persistent tendency towards techno-solutionism – the development of technology for technology’s sake without consideration of the societal application and benefits. We find issues of trustworthiness and responsibility are not integrated into the calls for proposals, but are siloed as separate areas of study. And we find there is no effort to involve civil society in either the design or receipt of funding in order to represent the public interest in the development of AI.
Before investing further public funds, we recommend some practical remedies: publicly accessible data, effective evaluation of the real-world impacts of funding, and mechanisms for civil society participation in funding. Unless the EU addresses these failings, the laudable aims of its strategy to be the epicentre of trustworthy AI will be fundamentally undermined.
It shows that Europe is insufficiently in touch with society when undertaking digital projects.
The report was published on the site of Eticas Research & Consulting, a company that describes itself as having the “aim of working on the impact of social, ethical and legal aspects of security policies, innovations and technological developments”.