Q1. What was your company’s unique approach to integrating technology to achieve the UN’s Sustainable Development Goals (SDGs)?
AI regulation is necessary but not sufficient to address the sustainability crisis brought on by disruptive technology. We live in a hyper-connected world, and we need to protect consumers. A truly consumer-centric approach is about making risks visible and predictable. Consumers have the right to know their risks. Moreover, industries are becoming more complex, with multiple services and vendors working together. We need ways to manage these complex value chains with certainty. Finally, governments need to play their role by stepping in and regulating critical areas of technology application. We need a radically different approach to transparency, one that brings a common language to ethical computing.
As a global, non-profit, open-source, and inclusive initiative, Open Ethics has a mission to engage citizens, legislators, engineers, and subject-matter experts in the transparent design and deployment of solutions backed by artificial intelligence to make a positive societal impact. Companies and individuals who believe in our mission can share the manifesto to express their beliefs (https://openethics.ai/manifesto/). In our work, we focus on three key areas that need attention:
1) building an inclusive dialogue by inviting citizens and experts to discuss and create standards for transparency and explainability;
2) developing tools and frameworks to estimate, design, and govern the impact of artificial intelligence and autonomous systems in general on a role-by-role basis;
3) providing a disclosure label, a bottom-up approach to disclosing the modes of operation of such systems in a standardized, user-friendly, and explicit way.
The pillars of our technology vision are described here: https://openethics.ai/vision/
Q2. What are some examples of SDG-focused projects that your company is currently working on?
The Open Ethics initiative runs and hosts projects that are important for the transparency ecosystem. As a non-profit initiative, our goal is to provide a home where projects, especially in the artificial intelligence, machine learning, and autonomous decision-making domains, can build and support a community of contributors. Examples of such projects are:
“Open Ethics Label”, providing information about product features and data processing risks to consumers in a human-centric way, using a set of recognizable visual labels and clear descriptions;
“The Open Ethics Canvas”, a free tool for designers, developers, product owners, and ethics professionals to be used in teams as an instrument to kick-start the conversation about building transparent and explainable technology products;
“Open Ethics Data Passport”, an electronic snapshot describing dataset acquisition and annotation practices, aimed at delivering built-in transparency and allowing systemic bias in trained AI models to be spotted (see the sketch after this list);
“Public Surveillance Transparency Project”, where we suggest instruments to enable transparency in the case of public surveillance. The project concerns oversight and transparency of surveillance processes put in place at a given physical location.
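To make the Data Passport idea more concrete, here is a minimal, purely illustrative sketch in Python. The DataPassport class, its field names, and the representation_ratio helper are hypothetical and not the actual Open Ethics Data Passport format; they only show how recording acquisition and annotation metadata alongside demographic counts could make systemic bias in a training set easier to spot.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: these fields are not the official Data Passport
# format; they illustrate the kind of acquisition and annotation metadata
# that helps surface systemic bias in a dataset before a model is trained.
@dataclass
class DataPassport:
    dataset_name: str
    acquisition_method: str            # e.g. "web scraping", "opt-in survey"
    annotation_process: str            # e.g. "crowdsourced, 3 annotators per item"
    demographic_counts: dict = field(default_factory=dict)

    def representation_ratio(self, group: str) -> float:
        """Share of samples belonging to a demographic group (0.0 if unknown)."""
        total = sum(self.demographic_counts.values())
        return self.demographic_counts.get(group, 0) / total if total else 0.0

# Example: a heavily skewed voice dataset is immediately visible.
passport = DataPassport(
    dataset_name="voice-commands-v1",
    acquisition_method="opt-in mobile app recordings",
    annotation_process="two independent annotators, adjudicated disagreements",
    demographic_counts={"female": 1200, "male": 4800},
)
print(passport.representation_ratio("female"))  # 0.2 flags under-representation
```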
Q3. What are the most difficult challenges your company and other companies generally face in the implementation and adoption of new sustainable technology?
Self-disclosure (or disclosure) has evolved greatly in the food industry, as we have grown used to nutrition labels and nutrition tables. Proteins, fats, carbs, expiry dates, allergens, etc., are now our best friends on the shelves of a grocery store. What if we could disclose product risks in the same manner as we do for food (or electrical products, or construction materials)? It’s good to have top-down regulation and make sure that our digital “diet” is safe. At the same time, a one-size-fits-all approach leaves no room for individual preferences. Self-disclosure allows individual choices, which in turn enable bottom-up regulatory mechanisms.
Self-disclosure, thus, is essential for democratic societies. However, as has happened in other industries, every vendor may decide to disclose in their own manner. The complexity of technical language and the variability of marketing wording make it hard for consumers to navigate. Obviously, we need standards. One possible way is to agree on a single standard specifying what vendors should do to be allowed to call their product “Ethical AI” (or Trustworthy AI, or Sustainable AI, or Responsible AI, or Best-in-the-universe AI).
Knowing that we, as Humanity with a big H, haven’t reached an agreement over the last 2,000 years on what “Ethical” even constitutes, perhaps such expectations are way too high. Diversity is a good thing, after all. Choice is a good thing. Just as the ordinary consumer today pushes brands to demonstrate greater sustainability efforts, tomorrow’s generation of consumers will demand algorithmic transparency. And that generation will get what it demands. At Open Ethics, we believe we should make it possible for consumers to create their own digital “diet” and become selective in their risk choices. To adopt standards, we need education. Educating a large group of consumers to build a critical mass of those who understand and care is the larger objective facing governments and the industry.
Q4. Tell me about a time your sustainable tech helped another company realize their SDG goals.
In a workshop we organized with the Montreal AI Ethics Institute (MAIEI), we worked with six technology companies to apply the Transparency Protocol to their AI products. Each company generated a Transparency Label by filling in the self-disclosure in front of the audience. One of the companies was Zoimeet, represented by Nick Yap, co-founder and CTO of the company. Zoimeet develops data-privacy-compliant speech recognition and analysis technology, building conversational AI solutions as well as speech models for organizations. Zoimeet advocates for AI privacy and has a patent pending built on data privacy compliance and trust.
By taking part in the workshop and generating the transparency label, Zoimeet became part of the inclusive dialogue towards ethical AI and a proponent of transparency and explainability. The workshop helped the public understand why transparency is a critical need for creating a safer digital space and allowed them to become informed end-consumers who can make an educated choice about the products of the companies they use, such as Zoimeet. As an end result, it enhanced the trust and credibility of the companies that completed the self-disclosure.
Q5. What is the biggest challenge your company has handled while making your sustainable tech accessible to different communities?
We believe that every guideline or standard should be inclusive. As oxymoronic as it sounds (inclusive and restricting at the same time), standards are meant to raise the bar and to bring processes above the minimal common denominator. While we can’t demand ethical standards as the common denominator, because in the broad societal view it would be too small, we can make sure we try hard to push for structured transparency and to consult the wide audience of those affected by the technology to form participatory dialogues. These dialogues require experts and citizens to work together. Forming such communities is both exciting and challenging these days, as it requires overcoming the digital divide and coping with self-sustaining informational bubbles.
Q6. Cost-effective sustainable tech can be a lifesaving and planet-saving approach. What actions does your company take to make your sustainable tech economical and fit for large-scale adoption?
As a non-profit initiative, we’re striving to operate in the leanest possible way. Apart from our involvement in bottom-up regulation to complement the governmental top-down approach, we also believe that the choice to use one software product over another can be modulated by consumer interest in sustainability and the wise use of energy resources. It’s no secret that a key factor in reducing the CO2 emissions associated with data-intensive computing (such as training AI models or operating blockchain infrastructure) is the greater efficiency of computing resources, including both the hardware and the software itself. The Algorithmic Benchmark Sustainability Score (ABSS), a project currently in the early prototyping phase, aims to bring transparent information about the energy consumption patterns of different algorithms based on their benchmarked runtime profiling.
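As a rough illustration of what benchmarked runtime profiling for such a score could look like, here is a minimal Python sketch. The assumed average power draw, the profile_energy and sustainability_score helpers, and the scoring formula are all hypothetical, not part of the ABSS prototype; real profiling would rely on hardware energy counters rather than a fixed wattage assumption.

```python
import time

# Illustrative only: this sketch times a workload and converts runtime into a
# crude energy estimate using an assumed average power draw.
ASSUMED_AVG_POWER_WATTS = 65  # hypothetical average CPU package power

def profile_energy(workload, *args, **kwargs):
    """Run a workload and return (runtime_seconds, estimated_energy_joules)."""
    start = time.perf_counter()
    workload(*args, **kwargs)
    runtime = time.perf_counter() - start
    return runtime, runtime * ASSUMED_AVG_POWER_WATTS

def sustainability_score(energy_joules, baseline_joules):
    """Hypothetical score: 1.0 means as efficient as the baseline, higher is better."""
    return baseline_joules / energy_joules

# Example: compare two implementations of the same task on identical input.
data = list(range(100_000, 0, -1))
_, e_builtin = profile_energy(sorted, data)
_, e_copy_sort = profile_energy(lambda d: d.copy().sort(), data)
print(sustainability_score(e_copy_sort, baseline_joules=e_builtin))
```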
Q7. What do you believe will be the global, long-term impact of your sustainable tech integration?
The Open Ethics Transparency Protocol (OETP) describes the creation and exchange of voluntary ethics disclosures for IT products across the supply chain. The Protocol describes how disclosures of data collection and data processing practices are formed, stored, validated, and exchanged in a standardized and open format. We see multiple possible ramifications that allow societies to gain control over their digital diets, either directly or with the help of a regulator accessing the information aggregated by Open Ethics:
- Informed consumer choices: End-users are able to make informed choices based on their own ethical preferences and product disclosure.
- Industrial-scale monitoring: Discovery of best and worst practices within market verticals, technology stacks, and product value offerings.
- Legally-agnostic guidelines: Suggestions for developers and product-owners, formulated in factual language, which are legally agnostic and could be easily transformed into product requirements and safeguards.
- Iterative improvement: Digital products, specifically those powered by artificial intelligence, could receive near real-time feedback on how their performance and ethical posture could be improved to cover security, privacy, diversity, fairness, power balance, non-discrimination, and other requirements.
- Labeling and certification: Mapping to existing and future regulatory initiatives and standards.
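To illustrate the idea of a standardized, open disclosure format underlying the protocol, here is a minimal Python sketch. The field names and the validate check are hypothetical examples, not the actual OETP schema; they only show how a voluntary disclosure of data collection and processing practices could be formed, minimally validated, and exchanged as machine-readable JSON.

```python
import json

# Hypothetical illustration only: the field names below are not the official
# OETP schema; they show how a voluntary, machine-readable disclosure for
# data collection and processing practices could be formed and exchanged.
disclosure = {
    "product": "ExampleChat Assistant",
    "vendor": "Example Corp",
    "version": "2.3.1",
    "data_collection": {
        "personal_data": True,
        "categories": ["voice", "usage metrics"],
        "retention_days": 90,
    },
    "data_processing": {
        "third_party_sharing": False,
        "automated_decision_making": True,
        "human_oversight": "on request",
    },
}

REQUIRED_KEYS = {"product", "vendor", "version", "data_collection", "data_processing"}

def validate(doc: dict) -> bool:
    """Minimal structural check before a disclosure is published or aggregated."""
    return REQUIRED_KEYS.issubset(doc)

if validate(disclosure):
    # Exchange the disclosure in an open, standardized, human-readable format.
    print(json.dumps(disclosure, indent=2))
```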
Q8. What’s your vision for the sustainable tech industry and your company’s role in it?
The sustainable tech industry has multiple facets and targets different values across the value chain: from ecology to human rights, from transparency to inclusion. In the software industry, many hopes have been placed in the idea of open-source. Open-source is a beautiful culture, but is it futile for AI-driven products?
No. It needs a new definition. For the economy to prosper and bring benefits to the underprivileged, we need to make sure that the open-source concept is redefined for the AI-driven world. In this world, we may benefit greatly from openly documenting our practices, both for developers and for consumers. The developer community can use and reuse the common knowledge and therefore “stand on the shoulders of giants.”
The consumer community, following best practices from other industries, should be able to access information about how these systems are built and how automated decisions are made. Open Access. Open Data. Open Source. And, finally, transparency in how values are respected: Open Ethics.
Nikita Lukianets
Founder,
Open Ethics.