Day: February 14, 2020

Recent AI strategy

We are already seeing governments essentially outsource their roles as regulators, leaving matters to self-regulation on an increasing scale. European governments, for example, after creating the right to be forgotten on search engines that pre-dated GDPR, left the task of enforcing this right to search engines themselves. The reason? Governments lacked the technological competence, resources, and political coherence to do so themselves.

A regulatory market is a new solution to the problem that traditional regulatory agencies, invented for the nation-state manufacturing age, have limited capacity to keep up with the global digital age.

It combines the incentives that markets create to invent more effective and less burdensome ways to provide a service with hard government oversight, ensuring that whatever the regulatory market produces satisfies the goals and targets set by democratic governments.

So, instead of governments writing detailed rules, governments instead set the goals: What accident rates are acceptable in self-driving cars? What amount of leakage from a confidential data set is too much? What factors must be excluded from an algorithmic decision?

Then, instead of tech companies deciding for themselves how they will meet those goals, the job is taken on by independent companies that move into the regulatory space, incentivized to invent streamlined ways to achieve government-set goals.

This might involve doing big data analysis to identify the real risk factors for accidents in self-driving cars, using machine learning to detect money laundering transactions more effectively than current methods, or building apps that detect when another app is violating its own privacy policies.
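To make one of those concrete, here is a hedged sketch of how a private regulator might screen transactions for money-laundering risk with a stock anomaly-detection model; the features, threshold and data are invented purely for illustration.

```python
# Hypothetical sketch: a private regulator screening transactions for
# anomalies (e.g. possible money laundering) with a stock ML model.
# The features, threshold and data are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy features per transaction: amount, hour of day, counterparties in 24h
normal = rng.normal(loc=[100, 14, 3], scale=[50, 4, 1], size=(1000, 3))
unusual = rng.normal(loc=[9000, 3, 40], scale=[500, 1, 5], size=(10, 3))
transactions = np.vstack([normal, unusual])

# Fit an unsupervised anomaly detector on all observed transactions
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

# predict() returns -1 for points the model treats as anomalous
flags = model.predict(transactions)
print(f"{(flags == -1).sum()} transactions flagged for human review")
```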

Independent private regulators would compete to provide the regulatory services tech companies are required by government to purchase.

How does this not become a race to the bottom, with private regulators trying to outbid each other to be as lenient as possible, the way that continued self-regulation might?

The answer is for governments to shift their oversight to regulating the regulators. A private regulator would require a license to compete, and could only get and maintain that license if it continues to demonstrate it is achieving the required goals.

The wisdom of the approach rests on this hard government oversight; private regulators have to fear losing their license if they cheat, get hijacked by the tech companies they regulate, or simply do a bad job.

The failure of government oversight is, of course, the challenge that got us here in the first place, as in the case of Boeing self-regulating safety standards on the ill-fated 737 Max.

But the government oversight challenge in a regulatory market will often be easier to solve than in the traditional setting: there are fewer regulators to oversee than there are tech companies, regulators have a strong incentive to keep their licenses, and oversight can draw on industry-wide data.

And because regulators could operate on a global scale, seeking licenses from multiple governments, they would be less likely to respond to the interests of a handful of domestic companies when they are at risk of losing their ability to operate around the world.

This approach may not solve every regulatory challenge or be appropriate in every context. But it could transform AI regulation into a more manageable and transparent problem.

HUMANS ARE KEY TO COMBATING BIAS IN AI

Artificial intelligence (AI) was once the stuff of science fiction. Today, however, it is woven into our everyday experiences in the form of chatbots, voice assistants and even Google Maps. In fact, according to Statista, 84% of global business organisations now believe that AI will give them a competitive advantage.

AI may be fairly commonplace (at least at a simplistic level), but developing it to maturity is proving more elusive. Training a machine to learn, respond and act like a human takes massive amounts of data inputs across countless scenarios.

Managing this process alone is tough for organisations, as they face many potential issues. The most common, and potentially the most dangerous, is biased data. If an organisation plans to excel with AI, combating this bias should be its number one priority. Otherwise, the company risks the algorithm delivering inaccurate results and potentially alienating large portions of its customer base.

The first step to tackling this problem is to understand how algorithms become biased in the first place. Every developer (and every person, for that matter) has conscious and unconscious biases that feed into the systems they build, and because an algorithm is only as smart as the data used to train it, this can set a dangerous precedent. Bad data has the potential to cause biased AI to make decisions that actively harm people and populations. But while humans are the root of all bias, they are also the key to removing it.

Today’s consumers want AI to be more natural and more human, but to achieve this the data that goes into the algorithms must be more representative of the real world.

Collecting diversified training data at scale from real people is the way to do this. Using a vetted global community that covers numerous countries, ages, genders, races, cultures, political affiliations, ideologies, socioeconomic and education levels, and more, allows organisations to validate that their algorithms are producing accurate, human-like and truly useful results. This applies to sourcing the baseline sets of training data, and to the ongoing collection of data, so it is advisable to introduce a structure which allows for continual feedback and modification.

It may be that some users report difficulties with certain aspects of the product, for example voice or facial recognition, and this feedback could then be incorporated into the next version of the algorithm to improve it for future users.
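One practical way to act on that kind of feedback is to report model accuracy per demographic group rather than as a single aggregate number, as in the minimal sketch below (the column names and data are hypothetical).

```python
# Minimal sketch: checking whether a model performs evenly across groups.
# The column names and values ("group", "label", "prediction") are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B"],
    "label":      [1, 0, 1, 1, 1, 0, 1],
    "prediction": [1, 0, 1, 0, 1, 0, 0],
})

# Accuracy per demographic group; a large gap suggests the under-served
# group needs more representative training data in the next iteration.
per_group_accuracy = (
    (results["label"] == results["prediction"])
    .groupby(results["group"])
    .mean()
)
print(per_group_accuracy)
```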

The reality is that, however sophisticated the technical implementation, AI can only ever be as good as the humans who programme it. This raises considerable issues when we factor in all of the intentional and unintentional biases that each person carries. To some extent bias will always exist within artificial intelligence, but by collecting real human interactions before release, businesses can train their algorithms and achieve results that provide real value to their customers.

We are reaching a point where AI has begun to influence the decisions which govern the individual and collective future of our society, so it is vital that the companies developing these algorithms take an active role in making AI more reflective of society, and fairer for all.

AI helping medicine

Artificial intelligence has been used for the first time to instantly and accurately measure blood flow, in a study led by UCL and Barts Health NHS Trust.

The results were found to be able to predict chances of death, heart attack and stroke, and can be used by doctors to help recommend treatments which could improve a patient’s blood flow.

Heart disease is the leading global cause of death and illness. Reduced blood flow, which is often treatable, is a common symptom of many heart conditions. International guidelines therefore recommend a number of assessments to measure a patient’s blood flow, but many are invasive and carry a risk.

Non-invasive blood flow assessments are available, including Cardiovascular Magnetic Resonance (CMR) imaging, but up until now, the scan images have been incredibly difficult to analyse in a manner precise enough to deliver a prognosis or recommend treatment.

In the largest study of its kind, funded by the British Heart Foundation and published in the journal Circulation, researchers took routine CMR scans from more than 1,000 patients attending St Bartholomew’s Hospital and the Royal Free Hospital and used a new automated artificial intelligence technique to analyse the images. By doing this, the teams were able to precisely and instantaneously quantify the blood flow to the heart muscle and deliver the measurements to the medical teams treating the patients.

By comparing the AI-generated blood flow results with the health outcomes of each patient, the team found that the patients with reduced blood flow were more likely to have adverse health outcomes including death, heart attack, stroke and heart failure.

The AI technique was therefore shown for the first time to be able to predict which patients might die or suffer major adverse events, better than a doctor could on their own with traditional approaches.

Professor James Moon (UCL Institute of Cardiovascular Science and Barts Health NHS Trust) said: “Artificial intelligence is moving out of the computer labs and into the real world of healthcare, carrying out some tasks better than doctors could do alone. We have tried to measure blood flow manually before, but it is tedious and time-consuming, taking doctors away from where they are needed most, with their patients.”

Dr Kristopher Knott (UCL Institute of Cardiovascular Science and Barts Health NHS Trust) added: “The predictive power and reliability of the AI was impressive and easy to implement within a patient’s routine care. The calculations were happening as the patients were being scanned, and the results were immediately delivered to doctors. As poor blood flow is treatable, these better predictions ultimately lead to better patient care, as well as giving us new insights into how the heart works.”

Dr Peter Kellman from the National Institutes of Health (NIH) in the US, who, working with Dr Hui Xue at the NIH, developed the automated AI techniques used to analyse the images in the study, said: “This study demonstrates the growing potential of artificial intelligence-assisted imaging technology to improve the detection of heart disease and may move clinicians closer to a precision medicine approach to optimize patient care. We hope that this imaging approach can save lives in the future.”

Clearview AI

When London’s Metropolitan Police Service announced its decision to adopt the controversial and intrusive Clearview AI surveillance system at the end of January, a global cacophony of protest erupted. Concerns, fear and trepidation surrounding facial recognition technologies, especially those like Clearview which can ID people in real time, have been simmering for decades, but the Met’s decision has finally caused public outrage to boil over. But how did we even get to the point where a relatively unknown startup managed to erect one of the tentpoles of futuristic dystopia and begin marketing it to aspiring dictatorial regimes, all while earning the wrath of national governments and tech industry titans alike?

Clearview AI was founded in 2017 by Richard Schwartz and now-CEO Hoan Ton-That. The company counts Peter Thiel and AngelList founder Naval Ravikant among its investors. Clearview’s technology is actually quite simple: Its facial recognition algorithm compares the image of a person’s face from security camera footage to an existing database of potential matches. Marketed primarily to law enforcement agencies, the Clearview app allows users to take and upload a picture of a person and then view all of the public images of that person, as well as links to where those photos were published. Basically, if you’re caught on camera anywhere in public, local law enforcement can use that image to mine your entire online presence for information about you, effectively ending any semblance of personal privacy.
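Under the hood, this sort of matching typically reduces to comparing face embeddings against a database and keeping the closest ones. The sketch below illustrates that general technique with the open-source face_recognition library; the image paths are placeholders, and this is a generic example, not Clearview’s actual pipeline.

```python
# Generic illustration of embedding-based face matching, NOT Clearview's
# actual system. Uses the open-source face_recognition library; the image
# paths are placeholders.
import face_recognition

# A probe image, e.g. a face captured from security-camera footage
probe = face_recognition.load_image_file("probe.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

# A small "database" of previously indexed public photos
known_paths = ["public_photo_1.jpg", "public_photo_2.jpg"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
    for p in known_paths
]

# Smaller distances mean more similar faces; ~0.6 is the library's
# conventional same-person threshold.
distances = face_recognition.face_distance(known_encodings, probe_encoding)
for path, dist in zip(known_paths, distances):
    verdict = "match" if dist < 0.6 else "no match"
    print(f"{path}: distance={dist:.2f} ({verdict})")
```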

However, the technology itself isn’t the issue; it’s how the company acquired its 3 billion-image database: Clearview scraped images from our collective social media profiles. Until it got caught, the company reportedly lifted pictures from Twitter, Facebook, Venmo and millions of other websites over the past few years. Twitter recently sent a cease-and-desist letter to Clearview after its actions were revealed, claiming that the company had violated Twitter’s policies and demanding that it stop lifting images from the platform immediately.

Google and YouTube made similar claims in their cease-and-desist letter. “YouTube’s Terms of Service explicitly forbid collecting data that can be used to identify a person. Clearview has publicly admitted to doing exactly that, and in response we sent them a cease-and-desist letter,” YouTube spokesperson Alex Joseph said in a February statement to CBS News.

Facebook and Venmo sent a C&D as well, though, as Slate points out, Peter Thiel sits on Facebook’s board and still invested $200,000 in the surveillance startup.

These threats of legal consequences don’t appear to have made much of an impression on Clearview CEO, Hoan Ton-That. In a recent CBS interview, Ton-That argued that Clearview has a First Amendment right to scrape people’s online data: “The way we have built our system is to only take publicly available information and index it that way,” he said. “You have to remember that this is only used for investigations after the fact. This is not a 24/7 surveillance system.”

Europe to set out AI rules next week

The European Commission will next week release its long-awaited plan on how to proceed toward laws for artificial intelligence (AI) that ensure the technology is developed and used in an ethical way.

The latest leaked proposal suggests a few options that the Commission is still considering for regulating the use of AI, including a voluntary labelling framework for developers and mandatory risk-based requirements for “high-risk” applications in sectors such as health care, policing, or transport.

However, an earlier proposal to introduce a three-to-five-year moratorium on the use of facial recognition technologies has vanished, suggesting the Commission won’t proceed with this idea.

The bloc’s executive arm is expected to propose updating existing EU safety and liability rules to address new AI risks.

“Given how fast AI evolves, the regulatory framework must leave room for further developments,” the draft says.

AI requires a “European governance structure”, the paper says, potentially replicating the model of the EU’s network of national data protection authorities.

EU governments are beginning to move forward on AI, “risking a patchwork of rules” throughout the continent, the draft says. Denmark, for example, has launched a data ethics seal. Malta has introduced a certification system for AI.

Following the release of the Commission white paper next week, the EU will spend months collecting feedback from industry, researchers, civil society and governments. Hard laws are expected to be written up in the autumn.

High risk AI

The Commission’s thinking on AI – ordered by new President Ursula von der Leyen as one of the initiatives she wants to launch in her first 100 days in office – is part of a global debate about these new technologies. Several researchers have been sounding the alarm that AI, unregulated, could undermine data privacy, allow rampant online hacking and financial fraud, lead to wrong medical diagnoses or biased decisions on lending and insurance. Last year, leaders of the 20 largest nations agreed to a set of broad ethical principles for AI, but haven’t yet gotten into the kind of specific ideas being discussed by the Commission.

According to the Commission’s draft paper, the challenge in pinning rules onto AI is that many of the decisions made by algorithms will in the future be illegible to humans – “even the developers may not know why a certain decision is reached”. This has become known as AI’s “black box” decision making.

In any case, EU laws should differentiate between “high-risk” and “low-risk” AI, with high-risk applications tested before they come into everyday use.

It will be necessary to set “appropriate requirements” on any data fed to AI algorithms, in order to ensure “traceability and compliance”, the paper says.

AI algorithms should be trained on data in Europe, “if there is no way to determine the way data has been gathered.”

Responsibility for AI applications should be shared between “developer and deployer”. Accurate records on data collection will need to be maintained.

Advantages of AI

Amid the cacophony of concern over artificial intelligence (AI) taking over jobs (and the world) and cheers for what it can do to increase productivity and profits, the potential for AI to do good can be overlooked. Technology leaders such as Microsoft, IBM, Huawei and Google have entire sections of their business focused on the topic and dedicate resources to build AI solutions for good and to support developers who do. In the fight to solve extraordinarily difficult challenges, humans can use all the help we can get. Here are a few powerful examples of artificial intelligence for good, applied to some of the toughest challenges facing society today.

There are more than 1 billion people living with a disability around the world. Artificial intelligence can be used to amplify their abilities and improve accessibility. It can facilitate employment, improve daily life and help people living with disabilities communicate. From opening up the world of books to deaf children to narrating what it “sees” to those with visual impairments, apps and tools powered by artificial intelligence are improving accessibility.

Climate Change, Conservation and the Environment

One of the most perplexing and pressing issues the planet faces today is climate change. Artificial intelligence innovators are developing ways to apply the technology to climate change, from simulations to monitoring, measurement and resource management. AI has also been deployed in conservation biology: AI tools make wildlife monitoring more accurate and efficient and streamline data analysis, while drones are used to monitor and count wildlife populations as well as catch poachers in the act.

World Hunger

In order to feed the world’s population by 2050, the United Nations estimates we will need to increase the world’s food production by 70%. This gargantuan task seems more plausible with the support of artificial intelligence. In addition to developing hardier seeds, artificial intelligence can be used to automate tedious tasks, detect disease for earlier interventions, apply herbicide precisely and generally maximize crop production.

Avoid Another AI Winter

Although there has been great progress in artificial intelligence (AI) over the past few years, many of us remember the AI winter in the 1990s, which resulted from overinflated promises by developers and unnaturally high expectations from end users. Now, industry insiders, such as Facebook head of AI Jerome Pesenti, are predicting that AI will soon hit another wall—this time due to the lack of semantic understanding.

“Deep learning and current AI, if you are really honest, has a lot of limitations,” said Pesenti. “We are very, very far from human intelligence, and there are some criticisms that are valid: It can propagate human biases, it’s not easy to explain, it doesn’t have common sense, it’s more on the level of pattern matching than robust semantic understanding.”

Other computer scientists believe that AI is currently facing a “reproducibility crisis” because many complex machine-learning algorithms are a “black box” and cannot be easily reproduced. Joelle Pineau, a computer science professor at McGill, points out that replicating and explaining how AI models work provides transparency that aids future innovation and research. It also becomes critical when algorithms replace human decision-making for things like deciding who stays in jail and who is approved for a mortgage.

Let’s take a look at what can be done to avoid another AI winter.

Start With Symbolic AI

The inability to explain and reproduce AI models is a hurdle we need to cross in order for AI to be both trusted and practical. This can be accomplished by taking a step back in time and looking at symbolic AI again and then taking two steps forward by combining symbolic AI (classic knowledge representations, rule-based systems, reasoning, graph search) with machine learning techniques.

Symbolic AI adds meaning, or semantics, to data through the use of ontologies and taxonomies. Rule-based systems are a major technology in symbolic AI, and they rely heavily on these ontologies and taxonomies to formulate correct and meaningful if/then rules. The advantage of rules and rule-based systems is that they provide consistent, repeatable results and go a long way toward making those results explainable.
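As a toy illustration of the approach just described, the sketch below pairs a two-entry taxonomy with explicit if/then rules whose firings double as the explanation; the categories, rules and thresholds are invented for the example.

```python
# Toy sketch of the symbolic, rule-based approach described above: a tiny
# taxonomy plus explicit if/then rules whose firings double as the
# explanation. Categories, rules and thresholds are invented for illustration.

# Minimal taxonomy: each term maps to its broader category
taxonomy = {
    "credit card": "payment instrument",
    "debit card": "payment instrument",
    "mortgage": "loan product",
}

def broader(term: str) -> str:
    """Look a term up in the taxonomy, falling back to the term itself."""
    return taxonomy.get(term, term)

# Explicit if/then rules: (human-readable name, condition, conclusion)
rules = [
    ("low-income loan rule",
     lambda f: broader(f["product"]) == "loan product" and f["income"] < 20_000,
     "refer application to a human underwriter"),
    ("payment instrument rule",
     lambda f: broader(f["product"]) == "payment instrument",
     "apply standard fraud checks"),
]

def decide(facts):
    """Return (rule name, conclusion) for every rule that fires,
    so each decision carries its own explanation."""
    return [(name, conclusion) for name, cond, conclusion in rules if cond(facts)]

print(decide({"product": "mortgage", "income": 18_000}))
# -> [('low-income loan rule', 'refer application to a human underwriter')]
```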

Eliminate Data Silos

For AI to deliver on current expectations, organisations also need to eliminate data silos so they can query across IT systems, issue sophisticated aggregate queries, and automate schema and data validation for accurate analytics results.

The rigor of assembling diverse, annotated training datasets for machine learning models demands the ability to query across databases or swiftly integrate disparate sources for this purpose. Semantic graph databases support this prerequisite for statistical AI with a standards-based approach in which each node and edge of the graph has a unique, machine-readable global identifier.

Thus, organizations can link together different databases to query across them while incorporating a range of sources for common use cases, such as predicting an individual’s next health issue or just-in-time supply chain management.
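A minimal sketch of that idea, using RDF and the rdflib Python library, appears below: records from two notional silos share a globally unique identifier (a URI), so a single SPARQL query can span both. The namespace, patient identifier and properties are made up for the example.

```python
# Minimal sketch of linking two data silos through globally unique
# identifiers (URIs) and querying across them with SPARQL, using rdflib.
# The namespace, patient URI and properties are made up for the example.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# "Silo" 1: clinical records
g.add((EX.patient42, EX.hasDiagnosis, Literal("type 2 diabetes")))

# "Silo" 2: pharmacy records, pointing at the same patient by the same URI
g.add((EX.patient42, EX.filledPrescription, Literal("metformin")))

# One query spanning what used to be two separate silos
query = """
PREFIX ex: <http://example.org/>
SELECT ?diagnosis ?drug WHERE {
    ?patient ex:hasDiagnosis ?diagnosis ;
             ex:filledPrescription ?drug .
}
"""
for diagnosis, drug in g.query(query):
    print(f"diagnosis={diagnosis}, prescription={drug}")
```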

These federated queries not only make silo culture obsolete, but also ensure that data always remains relevant and future-proof against upcoming technologies. In an age in which AI and analytics have become increasingly necessary for real-time action, organisations simply won’t have time to rebuild the schema and nomenclature between siloed databases.

Xiaomi spins off POCO

Xiaomi said today it is spinning off POCO, a smartphone sub-brand it created in 2018, as a standalone company that will now run independently of the Chinese electronics giant and set its own market strategy.

The move comes months after a top POCO executive — Jai Mani, a former Googler — and some other founding and core members left the sub-brand. The company today insisted that POCO F1, the only smartphone to be launched under the POCO brand, remains a “successful” handset. The POCO F1, a $300 smartphone, was launched in 50 markets.

Manu Kumar Jain, VP of Xiaomi, said POCO had grown into its own identity in a short span of time. “POCO F1 is an extremely popular phone across user groups, and remains a top contender in its category even in 2020. We feel the time is right to let POCO operate on its own now, which is why we’re excited to announce that POCO will spin off as an independent brand,” he said in a statement.

A Xiaomi spokesperson confirmed to TechCrunch that POCO is now an independent company, but did not share how it would be structured.

Xiaomi created the POCO brand to launch high-end, premium smartphones that would compete directly with flagship smartphones from OnePlus and Samsung. In an interview with yours truly in 2018, Alvin Tse, the head of POCO, and Mani said that they were working on a number of smartphones and were also thinking about other gadget categories.

At the time, the company had 300 people working on POCO, and they “shared resources” with the parent firm.

“The hope is that we can open up this new consumer need …. If we can offer them something compelling enough at a price point that they have never imagined before, suddenly a lot of people will show interest in availing the top technologies,” Tse said in that interview.

It is unclear, however, why Xiaomi never launched more smartphones under the POCO brand — despite the claimed success.

In the years since, Xiaomi, which is known to produce low-end and mid-range smartphones, itself launched a number of high-end smartphones, such as the K20 Pro. Indeed, earlier this week, Xiaomi announced it was planning to launch a number of premium smartphones in India, its most important market and where it is the top handset vendor.

“These launches will be across categories which we think will help ‘Mi’ maintain consumer interest in 2020. We also intend to bring the premium smartphones from the Mi line-up, which has recorded a substantial interest since we entered the market,” said Raghu Reddy, head of Categories at Xiaomi India, in a statement.

That sounds like an explanation. As my colleague Rita pointed out last year, Chinese smartphone makers have launched sub-brands in recent years to launch handsets that deviate from their company’s brand image. Xiaomi needed POCO because its Mi and Redmi smartphone brands are known for their mid-range and low-tier smartphones. But when the company itself begins to launch premium smartphones — and gain traction — the sub-brand might not be the best marketing tool.

Tarun Pathak, a senior analyst at research firm Counterpoint, told TechCrunch that the move would allow the Mi brand to flourish in the premium smartphone tier as the company begins to seriously look at 5G adoption.

“POCO can continue to make flagship-class devices, but at lower price points and 4G connectivity. 5G as a strategy requires a premium series which has consistent message across geographies…and Mi makes that cut in a more efficient way than POCO,” he said.

Why your next TV needs ‘filmmaker mode’

TVs this year will ship with a new feature called “filmmaker mode,” but unlike the last dozen things the display industry has tried to foist on consumers, this one actually matters. It doesn’t magically turn your living room into a movie theater, but it’s an important step in that direction.

This new setting arose out of concerns among filmmakers (hence the name) that users were getting a sub-par viewing experience of the media that creators had so painstakingly composed.

The average TV these days is actually quite a quality piece of kit compared to a few years back. But few ever leave their default settings. This was beginning to be a problem, explained LG’s director of special projects, Neil Robinson, who helped define the filmmaker mode specification and execute it on the company’s displays.

“When people take TVs out of the box, they play with the settings for maybe five minutes, if you’re lucky,” he said. “So filmmakers wanted a way to drive awareness that you should have the settings configured in this particular way.”

While very few people really need to tweak the gamma or adjust individual color levels, there are a couple settings that are absolutely crucial for a film or show to look the way it’s intended. The most important are ones that fit under the general term “motion processing.”

These settings have a variety of fancy-sounding names, like “game mode,” “motion smoothing,” “truemotion” and the like, and they are on by default on many TVs. What they do differs from model to model, but it amounts to taking content at, say, 24 frames per second and converting it to content at, say, 120 frames per second.

Generally this means inventing the images that come between the 24 actual frames — so if a person’s hand is at point A in one frame of a movie and point C in the next, motion processing will create a point B to go in between — or B, X, Y, Z, and dozens more if necessary.
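At its crudest, that interpolation is just a weighted blend of the two surrounding real frames; the numpy sketch below shows the idea, though real TVs use far more sophisticated motion-compensated algorithms.

```python
# Crude illustration of frame interpolation: synthesising in-between frames
# by blending the two real frames around them. Real TVs use
# motion-compensated algorithms, not a simple cross-fade like this.
import numpy as np

def interpolate(frame_a: np.ndarray, frame_b: np.ndarray, n_new: int):
    """Return n_new synthetic frames evenly spaced between frame_a and frame_b."""
    frames = []
    for i in range(1, n_new + 1):
        t = i / (n_new + 1)  # fractional position between the two real frames
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

# Two toy 2x2 grayscale "frames" from a 24 fps source
a = np.zeros((2, 2))
b = np.ones((2, 2))

# 24 fps -> 120 fps means inventing 4 frames between each real pair
for frame in interpolate(a, b, n_new=4):
    print(frame.round(2))
```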

This is bad for several reasons:

First, it produces a smoothness of motion that lies somewhere between real life and film, giving an uncanny look to motion-processed imagery that people often say reminds them of bad daytime TV shot on video — which is why people call it the “soap opera effect.”

Second, some of these algorithms are better than others, and some media is more compatible than the rest (sports broadcasts, for instance). While at best they produce the soap opera effect, at worst they can produce weird visual artifacts that can distract even the least sensitive viewer.

And third, it’s an aesthetic affront to the creators of the content, who usually crafted it very deliberately, choosing this shot, this frame rate, this shutter speed, this take, this movement, and so on with purpose and a careful eye. It’s one thing if your TV has the colors a little too warm or the shadows overbright — quite another to create new frames entirely with dubious effect.

So filmmakers, and in particular cinematographers, whose work crafting the look of the movie is most affected by these settings, began petitioning TV companies to either turn motion processing off by default or create some kind of easily accessible method for users to disable it themselves.

Ironically, the option already existed on some displays. “Many manufacturers already had something like this,” said Robinson. But with different names, different locations within the settings, and different exact effects, no user could really be sure what these various modes actually did. LG’s was “Technicolor Expert Mode.” Does that sound like something the average consumer would be inclined to turn on? I like messing with settings, and I’d probably keep away from it.

So the movement was more about standardization than reinvention. With a single name, icon, and prominent placement instead of being buried in a sub-menu somewhere, this is something people may actually see and use.

Not that there was no back-and-forth on the specification itself. For one thing, filmmaker mode also lowers the peak brightness of the TV to a relatively dark 100 nits — at a time when high brightness, daylight visibility, and contrast ratio are specs manufacturers want to show off.

The reason for this is, very simply, to make people turn off the lights.

There’s very little anyone in the production of a movie can do to control your living room setup or how you actually watch the film. But restricting your TV to certain levels of brightness does have the effect of making people want to dim the lights and sit right in front. Do you want to watch movies in broad daylight, with the shadows pumped up so bright they look grey? Feel free, but don’t imagine that’s what the creators consider ideal conditions.

Technology is anthropology

The interesting thing about the technology business is that, most of the time, it’s not the technology that matters. What matters is how people react to it, and what new social norms they form. This is especially true in today’s era, well past the midpoint of the deployment age of smartphones and the internet.

People — smart, thoughtful people, with relevant backgrounds and domain knowledge — thought that Airbnb  and Uber  were doomed to failure, because obviously no one would want to stay in a stranger’s home or ride in a stranger’s car. People thought the iPhone would flop, because users would “detest the touch screen interface.” People thought enterprise software-as-a-service would never fly, because executives would insist on keeping servers in-house at all costs.

These people were so, so, so wrong; but note that they weren’t wrong about the technology. (Nobody really argued about the technology.) Instead they were dead wrong about other people, and how their own society and culture would respond to this new stimulus. They were anthropologically incorrect.

This, of course, is why every major VC firm, and every large tech company, keeps a crack team of elite anthropologists busy at all times, with big budgets and carte blanche, reporting directly to the leadership team, right? (Looks around.) Oh. Instead they’re doing focus groups and user interviews, asking people in deeply artificial settings to project their usage of an alien technology in an unknown context, and calling that their anthropological, I’m sorry, their market research? Oh.

I kid, I kid. Sort of, at least, in that I’m not sure a crack team of elite anthropologists would be all that much more effective. It’s hard enough getting an accurate answer of how a person would use a new technology when that’s the only variable. When they live in a constantly shifting and evolving world of other new technologies, when the ones which take root and spread have a positive-feedback-loop effect on the culture and mindset toward new technologies, and when every one of your first 20 interactions with new tech changes your feelings about it … it’s basically impossible.

And so: painful trial and error, on all sides. Uber and Lyft  didn’t think people would happily ride in strangers’ cars either; that’s why Uber started as what is now Uber Black, basically limos-via-app, and Lyft used to have that painfully cringeworthy “ride in the front seat, fist-bump your driver” policy. Those are the success stories. The graveyard of companies whose anthropological guesses were too wrong to pivot to rightness, or who couldn’t / wouldn’t do so fast enough, is full to bursting with tombstones.

That’s why VCs and Y Combinator  have been much more secure businesses than startups; they get to run dozens or hundreds of anthropological experiments in parallel, while startups get to run one, maybe two, three if they’re really fast and flexible, and then they die.

This applies to enterprise businesses too, of course. Zoom was an anthropological bet that corporate cultures would make video conferencing big and successful if it actually worked. It’s easy to imagine the mood among CEOs instead being “we need in-person meetings to encourage those Moments of Serendipity,” which you’ll notice is the same argument that biased so many big companies against remote work and in favor of huge corporate campuses … an attitude that looks quaint, old-fashioned and outmoded, now.

This doesn’t just apply to the deployment phase of technologies. The irruption phase has its own anthropology. But irruption affects smaller sectors of the economy, whose participants are mostly technologists themselves, so it’s more anthropologically reasonable for techies to extrapolate from their own views and project how that society will change.

The meta-anthropological theory held by many is that what the highly technical do today, the less technical will do tomorrow. That’s a belief held throughout the tiny, wildly non-representative cryptocurrency community, for instance. But even if it was true once, is it still? Or is the shift away from that pattern itself another, larger social change? I don’t know, but I can tell you how we’re going to find out: painful trial and error.