
How Eurescom is Paving the Way for Digital Partnerships via Indo-Pacific Collaboration: INPACE

 

Adam Kapovits
Eurescom

Alessandro Bassi
Eurescom

As the digital transformation of global economies accelerates, fostering international cooperation in digital technology has become essential. To support this vision, the EU-funded INPACE project was launched in January 2024. INPACE plays an important role in supporting the implementation and advancement of the Digital Partnerships established between the European Union and key Indo-Pacific countries, including Japan, the Republic of Korea, and Singapore, as well as the cooperation with India through the Trade and Technology Council.

What is INPACE about?

The INPACE project, short for Indo-Pacific-European Hub for Digital Partnerships: Trusted Digital Technologies for Sustainable Well-Being, is a Coordination and Support Action designed to foster digital cooperation across the EU and the Indo-Pacific region. Running until June 2027, the project unites 21 European and Asian partners, creating a hub to support collaborative research, policy development, and industry connections. INPACE’s mission is to enhance digital technology partnerships and advance cooperation in strategic areas that benefit both regions. The digital partnerships cover an extremely wide range of technologies, notably through the collaboration in 16 Thematic Working Groups (TWGs) organised under 5 clusters (see Figure below).

The Context Behind INPACE: Strengthening EU-Asia Digital Cooperation

The European Union has made strides in building robust partnerships with Asia in digital technology fields. Guided by the EU’s Digital Compass Strategy and Strategy for Cooperation in the Indo-Pacific, Digital Partnerships were established with Japan and South Korea in 2022, and Singapore in 2023. Additionally, the EU initiated the Trade and Technology Council (TTC) with India to foster collaboration on technology and trade. These partnerships underscore a shared commitment to digital growth and emphasize the EU’s strategy to enhance cooperation on emerging digital technologies and develop policies for sustainable growth and resilience.

Objectives of INPACE

INPACE aims to turn these high-level partnerships into impactful projects by:

1. Supporting Digital Partnerships: Facilitating the EU’s Digital Partnerships and the TTC with India, translating these into tangible outcomes.
2. Boosting International Collaboration: Encouraging digital cooperation between Europe and Asia by connecting leading researchers, industry leaders, and policymakers.
3. Enhancing Research and Innovation Collaboration: Promoting joint research and industrial collaboration to drive technological advances and commercialization.
4. Fostering Policy Convergence: Supporting international digital policies to enhance synergies, inform policymaking, and facilitate international dialogue.
5. Promoting Human-Centered Technologies: Developing digital technologies that prioritize human-centric values for inclusivity, sustainability, and security.

The INPACE Symposium: Building Bridges in Digital Technology and Policy

The first INPACE Symposium on Digital Technologies and Policies took place on October 21-22, 2024, at the Daeyang AI Center, Sejong University, in Seoul, Republic of Korea. This two-day event brought together experts, policymakers, and industry leaders from Europe and the Indo-Pacific region to discuss key developments and policies in digital technology. Major topics included:

  • Trusted AI: Leveraging the power of AI responsibly, with an emphasis on ethics and reliability.
  • Semiconductor Innovation: Highlighting the latest advances in chip technology essential for digital infrastructures.
  • Future Networks: Analyzing the progress in next-generation connectivity and communication.
  • Cybersecurity: Addressing strategies to protect digital networks and data privacy.

Eurescom’s Dual Role at the INPACE Symposium

Eurescom played a pivotal role at the INPACE Symposium, with two significant contributions to the Technical Thematic Sessions and Workshops on October 21, 2024:

  • The first session, on “5G/6G technologies”, was moderated by Eurescom’s Project Manager Adam Kapovits. Joined by speakers Professor Rui Aguiar, Professor Hyonwoo Lee, and Professor Sunwoo Kim, the session examined the current challenges in 6G development and surveyed Korean and European 6G development strategies and research programmes, with a focus on how Europe and the Republic of Korea can collaboratively address these issues. The technical discussion offered valuable insights into the future of connectivity and the various approaches, and highlighted the potential for collaboration in next-generation network development.
  • The second session, titled “EU Funding Opportunities”, explored new avenues for collaboration following the upcoming association of the Republic of Korea with Horizon Europe. Moderated by Dr. Svetlana Klessova (GAC) and Adam Kapovits (Eurescom), the session outlined various funding opportunities within Horizon Europe, paying particular attention to Pillar II, which addresses global challenges and European industrial competitiveness. Dr. Klessova presented examples of Horizon Europe’s opportunities, while Mr. Kapovits discussed the Eureka programme, highlighting the CELTIC-NEXT initiative, in which Korea is actively engaged.
    A Q&A session followed, providing participants with an opportunity to connect directly with the INPACE consortium and explore collaborative funding options in more detail.

Looking Ahead: The Future Impact of INPACE

INPACE will continue to align strategies to further the goals of the Digital Partnerships and the Trade and Technology Council (TTC), fostering deeper cross-continental cooperation, advancing research and technology development, and supporting policy alignment.

In the coming months, INPACE has an exciting line-up of activities, including a series of webinars and a one-day workshop on 6G to be held at the University of Tokyo during EU-Japan Digital Week from 31st March 2025 to 4th April 2025. The second major symposium of INPACE is planned for the Autumn of 2025 in Singapore, building on the momentum of collaboration and innovation across regions.

This continued partnership will not only drive technological advancement but also contribute to a sustainable, inclusive, and prosperous digital future for Europe and the Indo-Pacific. The community is encouraged to stay tuned for these impactful events and initiatives that will further elevate digital cooperation and 6G innovation.

Further information

  • https://inpacehub.eu/rok-symposium-october-2024/
  • https://inpacehub.eu/2024/06/11/partner-interviews-eurescom/
  • https://smart-networks.europa.eu/


INPACE team after the first day of symposium workshops


INPACE symposium closing picture on Digital Technologies and policies in Seoul


AI Act – a conscious choice

Pooja Mohani
Eurescom GmbH

What is AI and why is it important?

AI is the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity [1]. It enables systems to perceive their environment, solve problems and act to achieve a specific goal. These systems gather data through attached sensors, process this data and respond. They are capable of optimizing and thus adapting their behaviour by learning, and can work autonomously.

Whilst some AI technologies have been around for more than 50 years, recent advances in computing power, the availability of enormous amounts of data, and new algorithms have led to major AI breakthroughs in recent years.

Artificial intelligence is already present, influences our everyday life and is digitally transforming our society; it has thus become an EU priority.

What is the AI Act?

To prepare Europe for an advanced digital age, the AI Act provides a comprehensive legal framework on AI, which addresses the risks of AI within Europe and sets the tone for upcoming AI regulations worldwide. The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI [2].

The AI Act aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensuring that AI in Europe respects set values and rules, and harnesses the potential of AI for industrial use.”

— European Parliament News

Why is it needed?

To make sure that AI systems used in the EU are safe, transparent, traceable, unbiased, trustworthy and environmentally friendly. AI systems should be overseen by people rather than by automation, and should thus foster inclusiveness.

The AI Act is part of a wider package of measures fostering the development of trustworthy AI in Europe, which also includes the Innovation Package and the Coordinated Plan on AI [3]. Together, these measures assure the health, safety and fundamental rights of people, and provide legal certainty to businesses across Europe. Overall, these initiatives strengthen the EU’s AI talent pool through education, training, skilling and reskilling activities.

Whilst existing legislation provides protection, it is insufficient to address the specific challenges that AI systems raise. The proposed rules will therefore:

  • address risks created by AI applications/services;
  • prohibit AI practices that pose unacceptable risks;
  • determine a list of high-risk applications and set clear requirements for them;
  • define specific obligations for deployers and providers of high-risk AI applications;
  • require assessment before a given AI system is put into service or placed on the market;
  • overall, establish a governance structure.

To whom does the AI Act apply?

This legal framework is applicable to both public and private actors inside and outside the EU as long as the AI system affects people located in the EU [4].

It is a concern for both the developers and the deployers of high-risk AI systems. Importers of AI systems must also ensure that the foreign provider has carried out the appropriate conformity assessment procedure, and that the system bears a European Conformity (CE) marking and is accompanied by the required documentation and instructions for use.

In addition, certain obligations are foreseen for providers of general-purpose AI models, including large generative AI models. Providers of free and open-source models are exempted from most of these obligations; however, the exemption does not cover obligations for providers of general-purpose AI models with systemic risks.

Research, development and prototyping activities preceding the release in the market are exempted and the regulation furthermore does not apply to AI systems that are exclusively for military, defence or for national security purposes, regardless of the type of entity carrying out those activities [4].

What happens if you don’t comply?

AI systems that do not respect the requirements of the Regulation attract penalties, including administrative fines; Member States have to lay down the rules on penalties in relation to infringements and communicate them to the Commission [4].

The Regulation sets out thresholds that need to be taken into account, as the illustrative sketch after the following list shows:

  • Up to €35m or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements on prohibited practices or non-compliance related to requirements on data;
  • Up to €15m or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the other requirements or obligations of the Regulation, including infringement of the rules on general-purpose AI models;
  • Up to €7.5m or 1.5% of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
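
To make the arithmetic behind these ceilings concrete, here is a minimal Python sketch. It is our own illustration rather than any official tooling; the tier labels and the helper name `max_fine` are assumptions, while the figures come from the list above.

```python
# Illustrative sketch of the AI Act fine ceilings listed above.
# Assumption: for each tier, the ceiling is the higher of the fixed
# cap and the percentage of total worldwide annual turnover.

TIERS = {
    "prohibited_practices":   (35_000_000, 0.07),   # up to €35m or 7%
    "other_obligations":      (15_000_000, 0.03),   # up to €15m or 3%
    "misleading_information": (7_500_000,  0.015),  # up to €7.5m or 1.5%
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a tier and turnover."""
    cap, pct = TIERS[tier]
    return max(cap, pct * annual_turnover_eur)

# Example: a company with €2bn turnover breaching a prohibited practice
# faces a ceiling of max(€35m, 7% of €2bn) = €140m.
print(f"€{max_fine('prohibited_practices', 2_000_000_000):,.0f}")
```

As the example shows, for large companies the percentage-based ceiling quickly dominates the fixed cap, which is what gives the Regulation its deterrent weight.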

To conclude, for the ICT industry the AI Act aims to foster innovation while ensuring that AI technologies are developed and deployed responsibly, ethically, and in the best interests of society. It provides assurance to users and guidance to stakeholders, overall contributing to the sustainable growth and adoption of AI technologies.

Further information

[1] https://www.europarl.europa.eu/topics/en/article/20200827STO85804/what-is-artificial-intelligence-and-how-is-it-used

[2] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

[3] https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383

[4] https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683

 


The hidden battle

Russia’s cyberwarfare against Ukraine and the West

Milon Gupta
Eurescom

A month after Russia’s invasion of Ukraine had started on 24th February 2022, some commentators were wondering why they could not see any signs of Russian cyberwarfare, which was expected to accompany the bombing of Ukrainian cities. As we know now, this first impression was completely wrong.

On 1st March, The Economist published an article under the headline: “Cyber-attacks on Ukraine are conspicuous by their absence”. [1] And an article in Nature, published on 17th March 2022, asked in the headline: “Where is Russia’s cyberwar?”, followed by this first sentence: “Many analysts expected an unprecedented level of cyberattacks when Russia invaded Ukraine — which so far haven’t materialized.” [2] It is a matter of debate what counts as “unprecedented” after the already constantly high level of cyberattacks by Russia against Ukraine since the annexation of the Crimean peninsula in 2014. However, just because the massive attack was not fully visible to some Western experts does not mean that it did not take place.

Russia’s cyberattack on Ukraine

At the end of March, it emerged that intensive Russian cyber-attacks had accompanied the Russian invasion. In a press conference on 29 March, the deputy director of the Estonian Information System Authority, Gert Auväärt, sounded the alarm and announced that the cyber threat level in Estonia had risen following Russia’s invasion of Ukraine and the accompanying cyberwarfare efforts. He mentioned that banks, authorities, agencies, telecoms firms, companies and other significant targets in Ukraine had fallen victim to denial-of-service or malware attacks. At the same time, Ukraine’s critical infrastructure had not been paralyzed despite the massive attacks.

Tom Burt, who oversees Microsoft’s investigations into big, complex cyberattacks, commented on the Russian cyberattacks by saying: “They brought destructive efforts, they brought espionage efforts, they brought all their best actors to focus on this.” He added that the Ukrainian defenders were able to thwart some of the attacks, as they had become accustomed to fending off Russian hackers after years of online intrusions in Ukraine. He praised the Ukrainian cyber defence: “They’ve been doing a good job, both defending against the cyberattacks and recovering from them when they are successful.” [3]

The conclusion is that the main reason we have not seen more devastating effects of Russia’s cyberwarfare in Ukraine seems to be that the Ukrainians were defending well.

Apart from successfully defending against Russian cyber-attacks, Ukraine received effective support from Belarusian hackers.

Counter-attacks in Belarus

The first setback Russia suffered in the cyberwar against Ukraine already happened before the invasion began. A hacktivist group of exiled Belarusian tech professionals called Cyber Partisans, who had been fighting against the regime of the autocratic Belarusian president Alexander Lukashenko for years, became active at the first signs of the Russian military buildup at the border with Ukraine. The Cyber Partisans attacked the Belarusian train system, which has been important for moving Russian soldiers, tanks, heavy weapons and other military equipment to the Ukrainian border. They exploited security holes in the more than two-decades-old Windows XP operating system, on which large parts of the IT infrastructure of the Belarusian train system have been based.

In collaboration with Belarusian railroad workers and dissident Belarusian security forces, the Cyber Partisans managed to slow down Russian troop movements and supplies. This contributed to the logistical chaos of the Russian armed forces in the first weeks of the war, which left Russian troops stranded on the front lines without food, fuel and ammunition. In this way, the cyber sabotage of Russian logistics supported Ukraine’s successful military resistance against the Russian armed forces in the Ukrainian capital Kyiv and other cities in the north of the country.

Russia’s cyberattacks on Western countries

Ukraine is by no means the only target of Russian cyberattacks. Cyberwarfare by Russia against Western countries has a long history. [4] The most prominent event was the cyberattack on Estonia in April 2007. Although it has never been proven that the Russian government was behind it, the trail clearly led to Russia. Since the Russian annexation of Crimea in 2014, cyberattacks against Western countries like Germany, France, Poland, the UK, and the US have increased in both intensity and scope.

Cyberwarfare by Russia has included a plethora of different activities, from hacker attacks to disinformation. There are indications that Russia interfered through disinformation and other measures with the Brexit vote in the UK and the US presidential election in 2016.

NATO’s cyber defence

Since 2008, NATO has been building up its cyber defence in response to growing cyberthreats from countries like Russia, China, and North Korea. A year after the cyberattack on Estonia, NATO founded the Cooperative Cyber Defence Centre of Excellence in Tallinn. At the 2014 NATO Summit in Wales, after the Russian annexation of Crimea, NATO adopted an enhanced policy and action plan. It established cyber defence as part of the Alliance’s core task of collective defence and set out to further develop NATO’s cyber defence capabilities in collaboration with industry. [5]

Conclusion

At the time of writing, it is not clear when and how the Russian war against Ukraine will end, and what types and levels of cyberwarfare will occur within this conflict. What appears certain is that cyberwarfare has become a standard element of conflict between nations, expanding the arsenal of hybrid warfare. The challenges of maintaining cybersecurity will consequently increase, making both the physical and the virtual world a less safe place. Significant investments in cyber defence and cybersecurity will be needed on all levels in order to ensure the security and resilience of Western democratic societies.

References

[1] Cyber-attacks on Ukraine are conspicuous by their absence, The Economist, 1 March 2022 – https://www.economist.com/europe/2022/03/01/cyber-attacks-on-ukraine-are-conspicuous-by-their-absence

[2] Elizabeth Gibney, Where is Russia’s cyberwar? Researchers decipher its strategy, Nature, 17 March 2022 – https://www.nature.com/articles/d41586-022-00753-9

[3] Preston Gralla, Russia is losing the cyberwar against Ukraine, too, Computerworld, 2 May 2022 – https://www.computerworld.com/article/3658951/russia-is-losing-the-cyberwar-against-ukraine-too.html

[4] Cyberwarfare by Russia, Wikipedia – https://en.wikipedia.org/wiki/Cyberwarfare_by_Russia

[5] Cyber defence, article on NATO website, 23 March 2022 – https://www.nato.int/cps/en/natohq/topics_78170.htm

 

Invented by DABUS


Milon Gupta
Eurescom

Johannes Gutenberg invented the movable metal-type printing process, Benjamin Franklin invented the lightning rod, and DABUS invented a beverage container. While the first two claims are widely accepted, the third claim has been the cause of a fundamental controversy on who can be an inventor. That is because DABUS is not a human being, but an artificial intelligence machine. And in conventional thinking, a machine cannot be an inventor, only a tool used by a human inventor.

According to the Wikipedia entry for “Inventor”, the matter is clear: “An inventor is a person who creates or discovers new methods, means, or devices for performing a task.” Ryan Abbott, a law professor at the University of Surrey, has been challenging this common notion since 2013. He rejects the view that only a person can be an inventor and claims that an AI machine could be an inventor as well. “We’re moving into a new paradigm where not only do people invent, people build artificial intelligence that can invent,” said Abbott, who in 2020 authored a book titled “The Reasonable Robot: Artificial Intelligence and the Law”.

The Artificial Inventor Project

According to Abbott, corporations are unwilling to push the issue of AI inventions if it means not being able to obtain legal protection for their products. Thus, he set up the Artificial Inventor Project [1] and teamed up with Stephen Thaler, founder of Imagination Engines Inc., to build a machine whose main purpose is to invent. The result was DABUS, an AI machine that “invented” not only the aforementioned beverage container, but also a device for attracting enhanced attention. Abbott and a group of volunteer lawyers filed patent applications for these inventions in 17 jurisdictions, listing DABUS as the inventor.


© Adobe Stock

Unsuccessful patent applications

The quest of Abbott and his team to put man and machine on an equal footing under international patent law has overwhelmingly been met with a negative response from patent offices all over the world. As of November 2021, the patent application is pending in 11 countries. In the US, Europe, Germany, the UK, and Australia, the patent application has been rejected, and appeals are pending.

The European Patent Office (EPO) and the UK Intellectual Property Office (UKIPO), for example, came to similar conclusions: they denied the patent applications on the grounds that an AI system cannot be listed as an inventor. The European Patent Convention and the UK Patents Act, which were the basis for the respective decisions, both require an inventor to be a named person. The same requirement is valid under the U.S. Patent Act.

The first patent for an AI machine

Despite the rejection by almost all patent offices, Abbott and his team finally had reason to celebrate a victory in July 2021: The Companies and Intellectual Property Commission (CIPC), an agency of the South African Department of Trade and Industry, granted a patent to the applicant Stephen L. Thaler and the inventor DABUS for a “Food Container and Devices and Methods for Attracting Enhanced Attention”, with the note: “The invention was autonomously generated by an artificial intelligence” [2]. That has made South Africa the first and, so far, only country to grant a patent to an AI inventor. One of the reasons for this result could be that the term “inventor” is not defined in South African patent law.

The distinction between owner and inventor

In patent law, there is a distinction between the owner of an invention and the inventor. Depending on the jurisdiction, this distinction is important in different ways. The owner of the patent is usually the one who has the right to exploit it. Nonetheless, the name of at least one inventor has to be provided; otherwise the patent application gets rejected.

And this is exactly where current patent laws fall short: Thaler did not invent the beverage container; it was DABUS, the AI machine he had built. If he had given his own name as inventor, more patent offices might have accepted his application.

Conclusion

The case of DABUS shows that current intellectual property and patent laws, which were usually written decades ago, are getting increasingly out of sync with a fast-evolving technology landscape. The expected progress of artificial intelligence in all areas of life should sooner or later lead to a reconsideration of legal concepts regarding inventorship. Who knows, the next breakthrough invention may not be generated by an ingenious scientist of flesh and blood, but rather by an advanced AI machine.

References:
[1] Artificial Inventor Project website – https://artificialinventor.com
[2] The patent for DABUS is registered in South Africa under the patent application number 2021/03242, application date: 13/05/2021, CIPC Patent Journal, July 2021, Vol 54, No. 07, Part II of II, 28 July 2021, page 255, URL: https://iponline.cipc.co.za/Publications/PublishedJournals/E_Journal_July%202021%20Part%202.pdf


The dark side of data

How data garbage hurts business and the environment

Milon Gupta
Eurescom

Everyone is talking about big data. There is indeed a large potential for extracting economic and societal value out of huge amounts of data. By feeding algorithms with data, machine learning could provide solutions to almost everything. So much for the bright side. However, in the shadows of the big data vision lurks a less pleasant reality: huge piles of data garbage, gazillions of data files lingering unused on servers around the world – dark data.

According to the “Databerg Report” published by information solution provider Veritas in 2015, organisations in Europe, the Middle East and Africa hold on average 14% of identifiable business critical data, 32% ROT (redundant, obsolete and trivial) data, and 54% dark data.

According to market research firm Gartner, dark data is defined as “the information assets organizations collect, process and store during regular business activities, but generally fail to use for other purposes.” These other, more productive purposes could be, for example, analytics, business relationships and direct monetisation.

How dark data is produced

The critical question is how dark data comes into existence in the first place. There are various causes and reasons. One of the underlying enablers of dark data is that data storage is seemingly cheap and abundant. Thus, all data that could possibly be useful is stored, whether it is actually used or not. And once data is stored, there is usually nobody who cares about checking and reducing data amounts.

On the production side, there are many contributors. Organisations often retain dark data for compliance purposes only. That is ironic, as in some cases storing data could create bigger compliance risks than benefits; just think of private data and the risk of violating data privacy regulations.

While in the past dark data was mainly produced by humans, nowadays the biggest share of dark data is produced by machines, including information gathered by sensors and telematics. According to an estimate by IBM from 2015, roughly 90% of data generated by sensors and analogue-to-digital conversions never gets used. It is doubtful whether this has improved in the last six years. I wouldn’t be surprised if it is even worse now.

Some organisations seem to believe that dark data could be useful to them in the future, once they have acquired better analytics and business intelligence technology to process the information. While this is theoretically possible, in practice I find it hard to believe that a lot of value will be generated in ten years from analysing the dark data generated today, mostly by machines. Even if a small amount of today’s dark data could be pure gold in ten years’ time, the question is whether it would be worth the problems dark data already creates today.

Why dark data is a problem

Given that cloud storage is cheap, the question is why dark data should be a problem at all. The answer lies in the huge scale of dark data. Once the amount of dark data exceeds a certain level, storage is no longer cheap. The “Databerg Report” from 2015 predicted that dark data could cause 891 billion dollars of avoidable storage and management costs by 2020, if left unchecked. I have not seen any recent study on the amount and cost of dark data. However, I have a strong suspicion that the real cost might be even higher today.

As storing huge amounts of data consumes a lot of energy and material for the data centre infrastructure, there is not just a financial cost, but also an environmental cost in the shape of carbon-dioxide emissions.

One of the reasons why the problem persists and might actually grow over the coming years is that most companies probably have no idea about the volume and cost of dark data.

What can be done about dark data

In my view, an important part of the solution can be derived from a famous quote attributed to Lord Kelvin: “If you cannot measure it, you cannot improve it.” Applying these words of wisdom to dark data, you could say: if you can measure dark data, you can remove it. Even if removing dark data is not always the preferred solution, for example because of compliance needs or value expected in the future, it would be a good start to be aware of the scope of the problem and to know which data on an organisation’s servers is dark. Maybe the machines that increasingly generate dark data could also help to remedy the problem, through the use of machine learning for weeding out useless data.
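
As a first step towards such measurement, consider this minimal Python sketch, entirely an illustration of the idea rather than an established tool: it walks a directory tree and flags files untouched for a configurable number of days as candidate dark data. The 730-day threshold, the example path, and the timestamp heuristic are assumptions; on volumes mounted with noatime, the modification time is the more reliable signal.

```python
import os
import time

# Illustration only: flag files untouched for `threshold_days` as
# candidate dark data and total up their size. The threshold and the
# timestamp heuristic are assumptions, not an established metric.

def find_dark_data(root: str, threshold_days: int = 365):
    cutoff = time.time() - threshold_days * 86_400
    candidates, total_bytes = [], 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # unreadable or vanished file: skip it
            # A file counts as "dark" here if it was neither read nor
            # modified within the threshold period.
            if max(st.st_atime, st.st_mtime) < cutoff:
                candidates.append(path)
                total_bytes += st.st_size
    return candidates, total_bytes

files, size = find_dark_data("/srv/shared", threshold_days=730)
print(f"{len(files)} candidate files, {size / 1e9:.1f} GB of potential dark data")
```

Even a crude report like this provides the measurement that Lord Kelvin’s maxim calls for; deciding what to archive or delete then becomes a conscious policy question rather than a guess.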


© AdobeStock

Excel accidents

The economic and social risks of spreadsheet errors

Milon Gupta
Eurescom

Whenever executives think of business risks, they usually consider well-known factors like competitors, compliance, and cybercrime, at the moment also COVID-19. However, there is a less obvious, yet potent risk at their fingertips – their trusted spreadsheet programme, which in most cases is Microsoft Excel. It is not just the quirks of the software itself, but rather the way business people use it that leads to trouble.

When Microsoft released the first version of Excel for the Macintosh in 1985 and two years later for Windows, nobody could have guessed how ubiquitous the use of this spreadsheet software would become. Already by the early 1990s, Excel had gained a dominating market position against its toughest competitor at the time, Lotus 1-2-3. It did not take long until the calculations and formulas in cells, rows, and columns led several users astray, causing a never-ending series of spreadsheet errors with sometimes spectacularly disastrous consequences.

History of errors

IT professor Raymond R. Panko from the University of Hawai’i has investigated spreadsheet errors for the last three decades and has come to devastating conclusions: almost 90 percent of all spreadsheets have errors. And even the most carefully edited spreadsheets have errors in one percent of all formula cells. This means that in larger spreadsheets with thousands of formulas there are dozens of errors.

While this in itself may not yet sound shocking, the implications of spreadsheet errors definitely are scary. Nearly one out of five large companies has suffered financial losses due to spreadsheet errors. Typically these errors are caused by a combination of human mistakes and the complexity of large spreadsheets, which provide plenty of opportunity to go wrong. Most of these mistakes do not have an economic impact, but some do. And sometimes, the damage is huge, as the following examples show.

US photographic products company Eastman Kodak had to restate financial results for two quarters by a combined 15 million dollars because of an erroneous spreadsheet. It had miscalculated the severance and pension-related termination benefits accrued by one employee.

JPMorgan Chase, the largest bank in the United States, made a wrong credit portfolio assessment based on several faulty equations in a spreadsheet, which cost them approximately 6.5 billion dollars in losses and fines. US mortgage-loan company Fannie Mae had to restate its 2003 third-quarter financials due to a 1.1 billion dollar spreadsheet error, which was due to the flawed implementation of a new accounting standard.

Lost COVID-19 test results in England

While the economic damage by erroneous spreadsheets of businesses large and small is already huge, there is also a high social price to be paid for spreadsheet errors in the public sector. One of the most recent cases involved the loss of COVID-19 test results in England. As BBC News reported in early October 2020, the stunning number of 16,000 coronavirus cases went unreported in England, due to a flawed Excel template by Public Health England (PHE), an executive agency of the UK Department of Health and Social Care. The problem was caused by the way PHE assembled logs produced by commercial firms paid to analyse swab tests of the public, to discover who has the virus.

The firms recorded their results correctly in CSV files, text-based lists that can be processed by Excel. PHE had set up an automatic process to pull these data together into Excel templates for upload to a central system. The problem was that the developers at PHE had chosen an old file format for the templates, the XLS format, instead of the current XLSX format. As a consequence, each XLS-based template could handle only about 65,000 rows of data instead of the one million-plus rows that the XLSX format is capable of. And as each test result created several rows of data, this meant that each template was limited to about 1,400 cases. When that total was reached, further cases were cut off.
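
The arithmetic behind the failure is easy to reproduce. The following Python sketch is a hypothetical illustration, not PHE’s actual pipeline; the row limits are the documented capacities of the two file formats, while `rows_per_case` is inferred from the ~1,400-case cap reported above.

```python
# Hypothetical illustration of the PHE failure mode. The legacy XLS
# format stores at most 65,536 rows per sheet; XLSX allows 1,048,576.
XLS_MAX_ROWS = 65_536
XLSX_MAX_ROWS = 1_048_576

def cases_fit(n_cases: int, rows_per_case: int, max_rows: int) -> bool:
    """Return True if every case fits into one sheet of the given format."""
    return n_cases * rows_per_case <= max_rows

# The reported cap of roughly 1,400 cases per XLS template implies
# dozens of rows per test result:
rows_per_case = XLS_MAX_ROWS // 1_400  # about 46 rows per case

print(cases_fit(16_000, rows_per_case, XLS_MAX_ROWS))   # False: cases silently cut off
print(cases_fit(16_000, rows_per_case, XLSX_MAX_ROWS))  # True: XLSX would have coped
```

A pre-flight check like this, or simply refusing to emit the obsolete XLS format at all, would have surfaced the overflow instead of silently dropping cases.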

How to contain spreadsheet errors

Efforts to understand and fight the problem of spreadsheet errors go back more than two decades. Already in 1999, a group of British spreadsheet researchers from the University of Greenwich, the University of Wales Institute Cardiff and Her Majesty’s Customs and Excise joined forces to create the European Spreadsheet Risks Interest Group (EuSpRIG), which is dedicated to the art of spreadsheet risk management. EuSpRIG claims to be the largest source of information on implementable processes and methods to inventory, test, fix, document, backup, archive, compare and control the legions of spreadsheets that support critical corporate infrastructures. EuSpRIG runs an annual conference which provides a forum for researchers, practitioners, trainers, vendors, consultants, regulators and auditors to discuss the latest developments in spreadsheet risk management.

Despite the efforts of EuSpRIG and others to reduce the occurrence and damage of spreadsheet errors, the problem seems to persist. In some cases, like the handling of COVID-19 test results in England, the solution may not be to improve the spreadsheet or to better educate users in the proper use of Excel. Instead, it might be better in such cases to get rid of spreadsheets altogether and handle large amounts of data in databases that ensure the consistency and integrity of the processed data.


© AdobeStock

Clueless users and tricky surroundings

Online meetings from the home office

Milon Gupta
Eurescom
gupta(at)eurescom.eu

The COVID-19 pandemic has changed the way office workers work. One of the major changes for many of them has been that they are no longer office workers but home office workers. Instead of having daily in-person meetings, they were forced to have online meetings. Apart from, at least temporarily, changing the communication culture in a number of organisations, the sudden move from physical to virtual meetings brought about a number of unwanted side effects.

There are two factors contributing to the unwanted side effects of online meetings from the home office: user experience, or rather inexperience, and the different environment. In most cases, these two factors reinforce each other. If this sounds too abstract, let us have a look at a few examples of each factor.

Clueless users

Let me start with some personal experiences from recent online meetings of European research projects in the ICT domain. You would expect technology-savvy researchers from the ICT field, who have had hundreds of online meetings already before the coronavirus lockdown, to be proficient in the use of web-conferencing systems. While this may be true for the majority of them, there is at least one ignorant participant in every call who creates minor or major disruptions.

My experience in ICT project calls is that most participants switch off their webcams, unlike users in most other domains. While this already removes one channel for unwanted side effects, it still leaves audio. And that can be really disruptive. Like the participant in one of my online project meetings who received a call on his mobile phone. The reason I know this is that his microphone was not muted, and I heard every word he was saying – unfortunately, I could not hear the official speaker in the meeting anymore, as his audio volume was a bit low. Appeals to the ignorant talker to mute his mic were of no avail – he was fully absorbed in his other conversation, which seemed to be much more interesting than our online meeting. Remember that this is an example from a group of experienced users. It gets more interesting if you add inexperience. The following examples are second-hand, but I believe they are true.

Let us stick to the audio channel for this one. BBC News quotes Neil Henderson from Zurich Insurance, who had a call with a client who was obviously in the bath, as Mr Henderson could hear splashing and the tap running. When the client realised that the microphone was on, the phone slipped into the bath. Then he (the client, not Mr Henderson) jumped out of the bath to get another phone, slid and fell.

If you think this little audio drama was exciting, remember that video offers many more creative opportunities for clueless users to entertain their less creative peers. One example I remember from a recent online project meeting was a participant who seemed oblivious to the fact that the webcam was on. He stood up and came back with a sandwich, which he slowly ate in a disgusting manner. It does not sound so bad when you read it, but it was quite disturbing to watch.

Even more unsettling was a woman from the US, who did something really embarrassing in a video conference call – she accidentally left her camera on while going to the toilet, watched in disbelief by her stunned colleagues. How do I know about this? The video went viral on Twitter.

Let us now have a look at the other factor, the environment in the home office.

Tricky surroundings

Already at the office you can have numerous audio-visual distractions that could affect your online meeting. However, even a noisy office environment is like the cave of a reclusive Zen monk in comparison to the audio-visual horrors that many home office environments generate. The worst I personally experienced was an inconsolably crying baby at the home office of a female participant who had not muted her mic.

On the visual side there are reports about life partners visibly passing by at the back of the room – completely naked! Even for those who enjoy the occasional diversion within the hours of looking at boring slides and faces, it may affect focus and productivity – not to speak of the embarrassment of the person in whose home the diversion happens. And while most humans in a household can be educated to display socially responsible behaviour when the webcam is on, there are also cats and dogs that have been reported to interfere with online meetings by making noises or jumping in front of the camera.

In conclusion, I see two paths for the evolution of online meetings at the home office. Scenario one: home office workers update their skills and design their surroundings and technical setup to get closer to an office environment. Scenario two: neither user behaviour nor home office surroundings significantly improve. Instead, the tolerance of online meeting participants will increase the more predictable disruptions like naked spouses and farting dogs become. Time will tell which scenario will dominate.


© AdobeStock

Face protection

How to escape facial recognition

Milon Gupta
Eurescom
gupta(at)eurescom.eu

Facial recognition is becoming ubiquitous. That is great news for marketers, policemen and dictators. Privacy-conscious citizens, however, are not amused. They do not relish the prospect of living in a surveillance society, where authorities and tech giants can monitor every step they make. Are there ways to defy surveillance and escape facial recognition? A few innovators have taken up the challenge.


© Adobe Stock

The road to ubiquitous surveillance

Over the past decade, dozens of databases of people’s faces have been compiled globally by companies and researchers. Many of these images are shared around the world, thus spreading the use and effectiveness of facial recognition technology. The databases are built on images from social networks, photo websites, dating services, and cameras placed in restaurants and on campuses.

Facial recognition is already commonplace in China: police scan public spaces for suspects, consumers pay for their shopping with their faces, and taxes are paid by face as well. Chinese unicorn start-ups like Megvii, SenseTime, CloudWalk, and Yitu are providing solutions which contribute to the Chinese government’s goal of becoming the global leader in Artificial Intelligence. These solutions are used by the Chinese government to establish complete surveillance of all its citizens.

Although China may be most advanced in the size and scope of using facial recognition, US tech giants like Google, Facebook and Microsoft are pushing the deployment of this technology as well – and so is the US government. US Immigration and Customs Enforcement officials have employed facial recognition technology to scan motorists’ photos to identify undocumented immigrants. And the FBI has used such systems for more than a decade to compare driver’s licenses and visa photos with the faces of suspected criminals, according to a Government Accountability Office report.

Many other countries are quickly adopting facial recognition technology to identify their citizens. France is set to become the first European country to use facial recognition technology for identifying citizens: the French government is planning to incorporate facial recognition into a mandatory digital identity for its citizens.

In November, France was to roll out an ID programme called Alicem, an acronym for “certified online authentication on mobile”. The Alicem app reads the chip on an electronic passport and checks its biometric photo against the mobile phone user via facial recognition to validate the identity. Once confirmed, the user can access a host of public services without further checks.

France’s data regulator, CNIL, has warned that the programme breaches the EU’s legal requirement of consent, because it provides no alternatives to facial recognition for accessing certain services. In addition, there are concerns over data security, as an allegedly secure French government messaging app was hacked earlier in 2019. Sooner or later, it will be hard to find a spot on Earth where your face is not recognised. More importantly, our faces will increasingly be used as identifiers to withdraw money or pass border controls. This increases the incentive for hackers to steal your digital face and gain access to your money and more.

Blocking facial recognition

Are we completely defenceless against facial recognition? An Israeli start-up says ‘No’. The start-up, called D-ID, claims to have developed a new solution which blocks facial recognition. Current solutions like eyeglasses that reflect light to jam cameras, or camouflaging your face with make-up and fancy headgear, are of limited use for avoiding recognition. Thus, D-ID has gone in a different direction: they replace human faces with computer-generated faces. The modifications are just enough to escape detection by facial recognition algorithms. If you put the original photo and the manipulated image side by side, the changes are noticeable, but on its own the altered picture appears normal. Their solution, called ‘Smart Anonymization’, can be used for videos and still images.

‘Smart Anonymization’ removes facial images without processing or profiling the subject. It then replaces the images with AI-generated, photorealistic faces of non-existent people. D-ID claims that these anonymised faces make the technology far superior to legacy solutions which rely on blurring or pixelation. The anonymised faces preserve key non-identifying attributes of the original face, including age, gender, expression, gaze direction and more. According to D-ID, this allows analytics to be performed while respecting privacy laws and regulations.
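
D-ID’s actual pipeline is proprietary, but the general detect-and-replace idea can be sketched. The Python/OpenCV example below is our own illustration, not D-ID’s method: it locates faces with a stock Haar cascade and pastes a pixelated placeholder exactly where a system like D-ID would instead insert an AI-generated face. The file names are hypothetical.

```python
import cv2

# Illustrative detect-and-replace pipeline (not D-ID's actual method).
# A production anonymiser would swap in a generated face; we use
# pixelation purely as a stand-in for the replacement step.

def anonymise_faces(image_path: str, output_path: str) -> int:
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread(image_path)  # returns None if the file is missing
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = img[y:y+h, x:x+w]
        # Downscale and upscale to destroy identifying detail.
        small = cv2.resize(face, (8, 8), interpolation=cv2.INTER_LINEAR)
        img[y:y+h, x:x+w] = cv2.resize(small, (w, h),
                                       interpolation=cv2.INTER_NEAREST)
    cv2.imwrite(output_path, img)
    return len(faces)

print(anonymise_faces("crowd.jpg", "crowd_anon.png"), "faces replaced")
```

The point of the sketch is the structure (detect, replace, re-encode); D-ID’s claimed advantage lies entirely in what goes into the replacement step.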


© Adobe Stock

Alternatives and open questions

As elegant as D-ID’s solution appears, it is not a panacea for escaping facial recognition. First of all, many people already have unaltered photos of themselves on the Web, which have already been stored and processed in the databases of tech giants and authorities. Second, it is not unlikely that AI-powered facial recognition systems will further advance and either link anonymized faces to real faces in the database or at least mark the manipulated photos as such and deny access.

Recently, a woman from the Chinese city of Wenzhou found out that, after plastic surgery, her access to all kinds of services that require facial recognition was blocked. Among others, access to payment services and online shops was denied, because the systems could not identify her anymore. Her doctor recommended that she simply register again on the central identification system. I have no information on how that worked out, but at the very least it meant a lot of hassle for the woman.

What alternatives are there? Some designers have been very imaginative in designing anti-surveillance hijabs, fashionable camouflage, photo-realistic, 3D-printed face masks, and seemingly random patterns printed on shirts to dazzle computer algorithms.

Outside of China, you might get away with such fancy trickery. However, the moment you go through security at an airport, your camouflage will only get you in trouble – instead of an AI system, a flesh-and-blood police officer will identify you.

Does this mean there is no escape from facial recognition and a surveillance society like in China? Not necessarily. Even if technical means may be too limited to fool facial recognition systems, democratic societies offer more potent means to stop tech giants and authorities from spying on us – public debate and legislation. It may take more time than donning a face mask, but it could be more effective in the long run.

Further information: Website of Israeli start-up D-ID – https://www.deidentification.co

Copyright © 2024 by Eurescom

 