
A recent strategy for AI regulation

We are already seeing governments essentially outsource their roles as regulators, leaving matters to self-regulation on an increasing scale. European governments, for example, after creating the right to be forgotten on search engines (a right that pre-dated the GDPR), left the task of enforcing it to the search engines themselves. The reason? Governments lacked the technological competence, resources, and political coherence to do so themselves.

A regulatory market is a new solution to the problem that traditional regulatory agencies, invented for the nation-state manufacturing age, lack the capacity to keep up with the global digital age.

It combines the incentives that markets create to invent more effective and less burdensome ways of providing a service with hard government oversight, ensuring that whatever the regulatory market produces satisfies the goals and targets set by democratic governments.

So, instead of governments writing detailed rules, governments instead set the goals: What accident rates are acceptable in self-driving cars? What amount of leakage from a confidential data set is too much? What factors must be excluded from an algorithmic decision?

Then, instead of tech companies deciding for themselves how they will meet those goals, the job is taken on by independent companies that move into the regulatory space, incentivized to invent streamlined ways to achieve government-set goals.

This might involve doing big data analysis to identify the real risk factors for accidents in self-driving cars, using machine learning to detect money laundering transactions more effectively than current methods, or building apps that detect when another app is violating its own privacy policies.

Independent private regulators would compete to provide the regulatory services tech companies are required by government to purchase.

How does this not become a race to the bottom, with private regulators trying to outbid each other to be as lenient as possible, the way that continued self-regulation might?

The answer is for governments to shift their oversight to regulating the regulators. A private regulator would require a license to compete, and could get and keep that license only by continuing to demonstrate that it achieves the required goals.

The viability of the approach rests on this hard government oversight; private regulators have to fear losing their license if they cheat, get hijacked by the tech companies they regulate, or simply do a bad job.

The failure of government oversight is, of course, the challenge that got us here in the first place, as in the case of Boeing self-regulating safety standards on the ill-fated 737 Max.

But the government oversight challenge in a regulatory market will often be easier to solve than in the traditional setting: there are fewer regulators than tech companies to oversee, regulators have a strong incentive to keep their licenses, and oversight can draw on industry-wide data.

And because regulators could operate on a global scale, seeking licenses from multiple governments, they would be less likely to respond to the interests of a handful of domestic companies when they are at risk of losing their ability to operate around the world.

This approach may not solve every regulatory challenge or be appropriate in every context. But it could transform AI regulation into a more manageable and transparent problem.

HUMANS ARE KEY TO COMBATING BIAS IN AI

Artificial intelligence (AI) was once the stuff of science fiction. Today, however, it is woven into our everyday experiences in the form of chatbots, voice assistants and even Google Maps. In fact, according to Statista, 84% of global business organisations now believe that AI will give them a competitive advantage.

AI may be fairly commonplace (at least at a simplistic level), but developing it to maturity is proving more elusive. Training a machine to learn, respond and act like a human takes massive amounts of data inputs across countless scenarios.

Managing this process alone is tough for organisations as they face many potential issues. The most common, and potentially the most dangerous, is biased data. If an organisation plans to excel with AI, combating this bias should be its number one priority. Otherwise, the company risks the algorithm delivering inaccurate results and alienating large portions of its customer base.

The first step to tackling this problem is to understand how algorithms become biased in the first place. Every developer (and every person, for that matter) has conscious and unconscious biases that feed into an algorithm's initial development, and because an algorithm is only as smart as the data used to train it, this can set a dangerous precedent. Bad data has the potential to cause biased AI to make decisions that actively harm people and populations. But while humans are the root of all bias, they are also the key to removing it.

Today’s consumers want AI to be more natural and more human, but to achieve this the data that goes into the algorithms must be more representative of the real world.

Collecting diversified training data at scale from real people is the way to do this. Using a vetted global community that covers numerous countries, ages, genders, races, cultures, political affiliations, ideologies, socioeconomic and education levels, and more, allows organisations to validate that their algorithms are producing accurate, human-like and truly useful results. This applies to sourcing the baseline sets of training data, and to the ongoing collection of data, so it is advisable to introduce a structure which allows for continual feedback and modification.
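As an illustration, a stratified evaluation is one simple way to check whether an algorithm performs equally well across such groups. The sketch below is a minimal, hypothetical Python example: the group labels, the evaluation records and the predict() stub are invented for illustration rather than drawn from any particular product.

```python
# Minimal sketch: measure a model's accuracy per demographic group to surface
# potential bias. Field names ("group", "label") and the predict() function
# are hypothetical placeholders, not part of any specific system.
from collections import defaultdict

def accuracy_by_group(examples, predict):
    """examples: iterable of dicts with 'features', 'label' and 'group' keys."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        total[ex["group"]] += 1
        if predict(ex["features"]) == ex["label"]:
            correct[ex["group"]] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy usage: a trivial rule stands in for a trained voice-recognition model.
eval_set = [
    {"features": {"pitch": "low"},  "label": "recognised", "group": "male"},
    {"features": {"pitch": "high"}, "label": "recognised", "group": "female"},
    {"features": {"pitch": "high"}, "label": "recognised", "group": "female"},
]
toy_model = lambda f: "recognised" if f["pitch"] == "low" else "not recognised"
print(accuracy_by_group(eval_set, toy_model))  # exposes the gap between groups
```

Running the same breakdown over the ongoing feedback data described above would let a team catch regressions for any one group before the next release.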

It may be that some users report difficulties with certain aspects of the product, for example voice or facial recognition, and this feedback could then be incorporated into the next version of the algorithm to improve it for future users.

The reality is that despite limitless technical implementations, AI can only ever be as good as the humans who programme it. This raises considerable issues when we factor in all of the intentional and unintentional biases that each person carries. To some extent bias will always exist within artificial intelligence, but by collecting real human interactions before release, businesses can train their algorithm and achieve results that provide real value to their customers.

We are reaching a point where AI has begun to influence the decisions which govern the individual and collective future of our society, so it is vital that the companies developing these algorithms take an active role in making AI more reflective of society, and fairer for all.

AI helping medicine

Artificial intelligence has been used for the first time to instantly and accurately measure blood flow, in a study led by UCL and Barts Health NHS Trust.

The resulting measurements were found to predict the chances of death, heart attack and stroke, and can be used by doctors to help recommend treatments that could improve a patient's blood flow.

Heart disease is the leading global cause of death and illness. Reduced blood flow, which is often treatable, is a common symptom of many heart conditions. International guidelines therefore recommend a number of assessments to measure a patient’s blood flow, but many are invasive and carry a risk.

Non-invasive blood flow assessments are available, including Cardiovascular Magnetic Resonance (CMR) imaging, but up until now, the scan images have been incredibly difficult to analyse in a manner precise enough to deliver a prognosis or recommend treatment.

In the largest study of its kind, funded by the British Heart Foundation and published in the journal Circulation, researchers took routine CMR scans from more than 1,000 patients attending St Bartholomew’s Hospital and the Royal Free Hospital and used a new automated artificial intelligence technique to analyse the images. By doing this, the teams were able to precisely and instantaneously quantify the blood flow to the heart muscle and deliver the measurements to the medical teams treating the patients.

By comparing the AI-generated blood flow results with the health outcomes of each patient, the team found that the patients with reduced blood flow were more likely to have adverse health outcomes including death, heart attack, stroke and heart failure.

The AI technique was therefore shown for the first time to be able to predict which patients might die or suffer major adverse events, better than a doctor could on their own with traditional approaches.

Professor James Moon (UCL Institute of Cardiovascular Science and Barts Health NHS Trust) said: “Artificial intelligence is moving out of the computer labs and into the real world of healthcare, carrying out some tasks better than doctors could do alone. We have tried to measure blood flow manually before, but it is tedious and time-consuming, taking doctors away from where they are needed most, with their patients.”

Dr Kristopher Knott (UCL Institute of Cardiovascular Science and Barts Health NHS Trust) added: “The predictive power and reliability of the AI was impressive and easy to implement within a patient’s routine care. The calculations were happening as the patients were being scanned, and the results were immediately delivered to doctors. As poor blood flow is treatable, these better predictions ultimately lead to better patient care, as well as giving us new insights into how the heart works.”

Dr Peter Kellman from the National Institutes of Health (NIH) in the US, who, working with Dr Hui Xue at the NIH, developed the automated AI techniques used to analyse the images in the study, said: “This study demonstrates the growing potential of artificial intelligence-assisted imaging technology to improve the detection of heart disease and may move clinicians closer to a precision medicine approach to optimize patient care. We hope that this imaging approach can save lives in the future.”

Clearview AI

When London’s Metropolitan Police announced its decision to adopt the controversial and intrusive Clearview AI surveillance system at the end of January, a global cacophony of protest erupted. Concerns, fear and trepidation surrounding facial recognition technologies, especially those like Clearview which can ID people in real time, have been simmering for decades, but the Met’s decision has finally caused public outrage to boil over. But how did we even get to the point where a relatively unknown startup managed to build one of the tentpoles of futuristic dystopia and begin marketing it to aspiring dictatorial regimes, all while earning the wrath of national governments and tech industry titans alike?

Clearview AI was founded in 2017 by Richard Schwartz and now-CEO Hoan Ton-That. The company counts Peter Thiel and AngelList founder Naval Ravikant among its investors. Clearview’s technology is actually quite simple: its facial recognition algorithm compares the image of a person’s face from security camera footage to an existing database of potential matches. Marketed primarily to law enforcement agencies, the Clearview app allows users to take and upload a picture of a person and then view all of the public images of that person, as well as links to where those photos were published. Basically, if you’re caught on camera anywhere in public, local law enforcement can use that image to mine your entire online presence for information about you, effectively ending any semblance of personal privacy.
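Clearview has not published its implementation, but most face-search systems of this kind follow the same basic pattern: encode each face as a numerical embedding and rank database entries by their similarity to the probe image. The sketch below illustrates only that generic pattern; the embedding dimensions, URLs and the embedding model are invented for the example and do not reflect Clearview's actual system.

```python
# Generic sketch of a face-search pipeline: compare an embedding of the probe
# face against a database of embeddings built from scraped images.
# embed dimensions and data are toy values; a real system would use a trained
# face-embedding model in place of the random vectors below.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(probe_embedding, database, threshold=0.8):
    """database: list of (source_url, embedding) pairs."""
    matches = []
    for url, emb in database:
        score = cosine_similarity(probe_embedding, emb)
        if score >= threshold:
            matches.append((url, score))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Toy data: random 128-dimensional vectors standing in for real embeddings.
rng = np.random.default_rng(0)
db = [(f"https://example.com/photo{i}", rng.normal(size=128)) for i in range(5)]
probe = db[2][1] + rng.normal(scale=0.05, size=128)  # noisy copy of entry 2
print(find_matches(probe, db)[:1])                   # top hit links back to photo2
```

The privacy concern follows directly from this design: once a face is in the database, any new photo of that person can be traced back to the pages it was scraped from.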

However, the technology itself isn’t the issue; the issue is how the company acquired its 3 billion-image database: Clearview scraped images from our collective social media profiles. Until it got caught, the company reportedly lifted pictures from Twitter, Facebook, Venmo and millions of other websites over the past few years. Twitter recently sent a cease-and-desist letter to Clearview after these practices were revealed, claiming that they violated Twitter’s policies and demanding that Clearview stop lifting images from its platform immediately.

Google and YouTube made similar claims in their cease-and-desist letter. “YouTube’s Terms of Service explicitly forbid collecting data that can be used to identify a person. Clearview has publicly admitted to doing exactly that, and in response we sent them a cease-and-desist letter,” YouTube spokesperson Alex Joseph said in a February statement to CBS News.

Facebook and Venmo sent cease-and-desist letters as well, though, as Slate points out, Peter Thiel currently sits on Facebook’s board yet invested $200,000 in the surveillance startup.

These threats of legal consequences don’t appear to have made much of an impression on Clearview CEO, Hoan Ton-That. In a recent CBS interview, Ton-That argued that Clearview has a First Amendment right to scrape people’s online data: “The way we have built our system is to only take publicly available information and index it that way,” he said. “You have to remember that this is only used for investigations after the fact. This is not a 24/7 surveillance system.”

Advantages of AI

Amid the cacophony of concern over artificial intelligence (AI) taking over jobs (and the world) and cheers for what it can do to increase productivity and profits, the potential for AI to do good can be overlooked. Technology leaders such as Microsoft, IBM, Huawei and Google have entire sections of their business focused on the topic and dedicate resources to build AI solutions for good and to support developers who do. In the fight to solve extraordinarily difficult challenges, humans can use all the help we can get. Here are 8 powerful examples of artificial intelligence for good as it is applied to some of the toughest challenges facing society today.

Accessibility

There are more than 1 billion people living with a disability around the world. Artificial intelligence can be used to amplify these people’s abilities and improve accessibility. It can facilitate employment, improve daily life and help people living with disabilities communicate. From opening up the world of books to deaf children to narrating what it “sees” to those with visual impairments, apps and tools powered by artificial intelligence are improving accessibility.

Climate Change, Conservation and the Environment

One of the most perplexing and pressing issues the planet faces today is climate change. Artificial intelligence innovators are developing ways to apply the technology to the problem, from climate simulations to monitoring, measurement and resource management. AI has also been deployed in conservation biology, where it makes wildlife monitoring more accurate and efficient and streamlines data analysis. Drones are also used to monitor wildlife populations, count animals and catch poachers in the act.

World Hunger

In order to feed the world’s population by 2050, the United Nations estimates we will need to increase the world’s food production by 70%. This gargantuan task seems more plausible with the support of artificial intelligence. In addition to developing hardier seeds, artificial intelligence can be used to automate tedious tasks, detect disease for earlier intervention, apply herbicide precisely and generally maximize crop production.

Avoid Another AI Winter

Although there has been great progress in artificial intelligence (AI) over the past few years, many of us remember the AI winter in the 1990s, which resulted from overinflated promises by developers and unnaturally high expectations from end users. Now, industry insiders, such as Facebook head of AI Jerome Pesenti, are predicting that AI will soon hit another wall—this time due to the lack of semantic understanding.

“Deep learning and current AI, if you are really honest, has a lot of limitations,” said Pesenti. “We are very, very far from human intelligence, and there are some criticisms that are valid: It can propagate human biases, it’s not easy to explain, it doesn’t have common sense, it’s more on the level of pattern matching than robust semantic understanding.”

Other computer scientists believe that AI is currently facing a “reproducibility crisis” because many complex machine-learning algorithms are a “black box” and cannot be easily reproduced. Joelle Pineau, a computer science professor at McGill, points out that replicating and explaining how AI models work provides transparency that aids future technology innovation and research efforts. It also becomes critical when algorithms replace human decision-making for things like deciding who stays in jail and who is approved for a mortgage.

Let’s take a look at what can be done to avoid another AI winter.

Start With Symbolic AI

The inability to explain and reproduce AI models is a hurdle we need to cross in order for AI to be both trusted and practical. This can be accomplished by taking a step back in time and looking at symbolic AI again and then taking two steps forward by combining symbolic AI (classic knowledge representations, rule-based systems, reasoning, graph search) with machine learning techniques.

Symbolic AI adds meaning, or semantics, to data through the use of ontologies and taxonomies. Rule-based systems are a major technology within symbolic AI, and they rely heavily on these ontologies and taxonomies to formulate correct and meaningful if/then rules. The advantage of rules and rule-based systems is that they produce consistent, repeatable results and also make those results far easier to explain.
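As a rough illustration of how such a rule layer can sit alongside a statistical model, the sketch below applies if/then rules grounded in a tiny taxonomy before falling back to a stubbed machine-learning classifier. The taxonomy, the rules and the classify() stub are invented for the example; the point is that the rule path yields a repeatable, human-readable explanation for each decision.

```python
# Minimal sketch of combining symbolic AI (a taxonomy plus if/then rules)
# with a statistical classifier. The taxonomy, rules and classify() stub are
# illustrative assumptions, not a description of any particular system.
TAXONOMY = {
    "aspirin": "medication",
    "ibuprofen": "medication",
    "bandage": "medical supply",
}

def classify(text):
    """Stand-in for a machine-learning classifier returning (label, confidence)."""
    return ("general product", 0.55)

def classify_with_rules(text):
    # Rule 1: a known taxonomy term determines the label outright.
    for term, label in TAXONOMY.items():
        if term in text.lower():
            return label, f"rule: '{term}' is a {label} in the taxonomy"
    # Rule 2: otherwise fall back to the statistical model, flagging low confidence.
    label, confidence = classify(text)
    if confidence < 0.6:
        return label, f"model: low confidence ({confidence:.2f}), review suggested"
    return label, f"model: confidence {confidence:.2f}"

print(classify_with_rules("Aspirin 100mg tablets"))  # decision explained by a rule
print(classify_with_rules("Garden hose, 20m"))       # decision explained by the model
```

Every answer carries its own explanation, which is exactly the property that pure pattern-matching systems struggle to provide.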

Eliminate Data Silos

For AI to deliver on current expectations, organizations also need to eliminate data silos so they can query across IT systems, issue sophisticated aggregate queries, and automate schema and data validation for accurate analytics results.

The rigor of assembling diverse, annotated training datasets for machine learning models mandates the ability to query across databases or to swiftly integrate disparate sources for this purpose. Semantic graph databases support this prerequisite for statistical AI with a standards-based approach in which each node and edge of the graph has a unique, machine-readable global identifier.

Thus, organizations can link together different databases to query across them while incorporating a range of sources for common use cases, such as predicting an individual’s next health issue or just-in-time supply chain management.

These federated queries not only make silo culture obsolete, but also ensure that data always remain relevant and future-proof against upcoming technologies. In an age in which AI and analytics have become increasingly necessary for real-time action, organizations simply won’t have time to rebuild the schema and nomenclature between siloed databases.
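To make the idea concrete, here is a minimal sketch of querying across two former silos once both describe the same entity with a shared, globally unique identifier. It uses the open-source rdflib library and in-memory graphs purely for illustration; the URIs, predicates and data are invented, and a production deployment would use a semantic graph database with genuinely federated SPARQL queries rather than merging data in memory.

```python
# Minimal sketch: two "silos" (a clinical record and a pharmacy claims record)
# both refer to the same person via a shared global identifier, so a single
# query spans both once they are loaded into one graph. All URIs, predicates
# and values are made up for this example.
from rdflib import Graph

CLINICAL = """
@prefix ex: <http://example.org/> .
ex:patient42 ex:diagnosis "type 2 diabetes" .
"""

CLAIMS = """
@prefix ex: <http://example.org/> .
ex:patient42 ex:filledPrescription "metformin" .
"""

g = Graph()
g.parse(data=CLINICAL, format="turtle")  # silo 1
g.parse(data=CLAIMS, format="turtle")    # silo 2

# One query across both former silos, joined on the shared global identifier.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?patient ?diagnosis ?drug WHERE {
        ?patient ex:diagnosis ?diagnosis ;
                 ex:filledPrescription ?drug .
    }
""")
for row in results:
    print(row.patient, row.diagnosis, row.drug)
```

Because both records use the same identifier for the patient, no bespoke schema mapping is needed to answer cross-silo questions such as the "next health issue" use case mentioned above.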