24/ When YouTube was acquired, it was viewed as a liability for Google because of the copyright issues with UGC. Over time YouTube became a model for the legitimate protection of IP. It took a decade, and YT is very different now than it was then in that regard.
25/ There’s no getting around the fact that one site can’t simply move the words around, provide a link, and claim the result is just “research” or “fair use” or “derivative work”. There’s 200 years of law and constitutional protection at work.
26/ When it comes to “disruption” a big part of my belief is that Google spent 20+ years wrestling with this topic while also balancing business needs/desires. Their caution is the result of the reality they experienced.
27/ This leads to the third big area of concern (C), and that is the notion of “responsible AI”, which I believe will lead to a significant tempering of output but also to a *HUGE* missed opportunity to make the world’s knowledge more accessible.
28/ “Responsible AI” is the first time a technology spawned an almost police action before it was even deployed, primarily coming about during the early days of image recognition. Imagine if we had locked down early PCs a la Trustworthy Computing, but in 1985, or the internet in…
29/ The biggest US companies and their CEOs created “Policy Recommendations for Responsible Artificial Intelligence”, in which, before any real use/deployment, they already called on Congress to regulate AI [sic]. s3.amazonaws.com/brt.org/Business_Roundtable_Artificial_Intelligence_Policy_Recommendations_Jan2022_1...
30/ These recommendations of course appear “good”, but they cannot possibly survive the complexity of information, knowledge, scientific peer review, political parties, school boards, and the world of what is deemed “acceptable” at any given time.
31/ Much of Reddit has been consumed trying to get Bing or OpenAI to say bad words or, worse, some “cancelable” offense. As it turns out, this is not difficult. Worse, it is easy to stumble into such offenses crossed with clear factual errors.
32/ The first answers to these problems will be to retreat and only say things that are “established facts” and “acceptable in today’s context”. As we know, humans are not allowed to say bad things even if they caveat them with “this is how people talked” or “I’m quoting”.
33/ The most mundane topics become off limits or “not worth the risk”. Even in a business context, this is enormously difficult. I can’t even make a complete list of all the times I dealt with spelling dictionaries, maps, clip art, and even fonts that were deemed “irresponsible”.
34/ Quite simply, the whole idea of “default responsible” when it comes to generating content based on user questions without human review of every input and output is unsolvable. There has to be room for mistakes, offenses, or worse.
35/ But how can that be with a default commitment to being responsible? Worse, even if legal liability is removed, even with a EULA/waiver/consent box, no entity wants the endless, ongoing PR crisis of the day every time a news event causes a new wave of prompts and generated answers.
36/ This commitment from CEOs/lawyers/comms/HR is an invitation to regulate AI a priori. In many ways this is the worst position to be in: regulating something before it has even really been invented. They asked for it to happen, promising only “responsible” output.
37/ So along with the existing legal framework needing adjusting to account for an unprecedented scale of automated “use” (fair or otherwise), the notion of “responsible AI” will need to be revisited lest it twist itself around every side of every issue.
38/ This is not “trustworthy computing”, because that was binary: protect from bad people even at the expense of usability, as we stated at the time. This is a proactive agenda designed to appease a subset of customers, and it can’t possibly please all constituents of every issue.
39/ Responsible AI is much more like Google’s IPO promise of “Don’t be Evil”. There was great skepticism about that at the time, and great hope. The skeptics were proven right because the world is complex and murky and unknown, not just good and evil.
40/ What does this mean in practice? First, big companies are going to continue to constrain the scenarios and “sanitize” the output. Huge swathes of content will simply not exist for fear of being “irresponsible”, “bad PR”, or illegal (or potentially so).
41/ Big companies will end up focusing on mundane results, especially in search, that effectively provide a better expression of “OneBox” answers for known topics, with scrubbed inputs, prompt kill-lists, hand-coded default responses, apologies, etc.
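(For illustration only: a minimal Python sketch of the “kill-list plus hand-coded default” scrubbing pattern described above. The blocked terms, refusal message, and the generate callable are all hypothetical placeholders, not anything from the thread.)

BLOCKED_TERMS = {"badword", "offense"}  # hypothetical kill-list; real ones run to thousands of entries
DEFAULT_RESPONSE = "Sorry, I can't help with that topic."  # hand-coded fallback

def answer(prompt: str, generate) -> str:
    """Scrub both the incoming prompt and the generated output."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return DEFAULT_RESPONSE  # refuse before the model ever runs
    result = generate(prompt)
    if any(term in result.lower() for term in BLOCKED_TERMS):
        return DEFAULT_RESPONSE  # refuse after the fact too
    return result

The pattern is crude by construction: everything not provably safe collapses into the same canned apology, which is exactly the “sanitized OneBox” outcome described above.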
42/ Second, productivity tools and use cases for LLMs will end up focusing on much narrower cases of mundane and repetitive work. This is work where LLMs are basically improved grammar/spelling/templates for common interactions.
43/ The biggest barrier to using a basic Word template has been customizing it to the exact customer/use context without breaking grammar. LLMs make this easy (a minimal sketch follows).
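(A minimal sketch of this template-customization case, assuming access to OpenAI’s chat completions client; the model name, template text, and helper function are placeholders, not anything from the thread.)

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TEMPLATE = "Dear {name}, thank you for contacting us about {issue}. ..."  # placeholder boilerplate

def customize_template(name: str, company: str, issue: str) -> str:
    # The mundane-but-valuable case: adapt boilerplate to one customer's
    # context without breaking grammar.
    prompt = (
        f"Rewrite this customer-service template for {name} at {company}, "
        f"whose issue is: {issue}. Keep it brief and grammatical.\n\n"
        f"{TEMPLATE}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content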
44/ LLMs will be valuable to some degree for summarizing first-party content, improving first-party writing, or even modifying first-party images using available/licensed images (e.g., show this photo of our new product being used on an airplane).
45/ Unfortunately, these are not the “important” cases. These will not drive whole new layers of productivity tools. They will be great additions to existing workflows and tools, such as CS or CRM tools.
46/ Therein lies the big opportunity: new tools that approach hard problems and high-value prompts AND that from the outset work within the developed legal framework while taking advantage of the world’s knowledge. Those have a huge advantage.

Invent. Take risks BigCo can’t. //END
PS/ Love this example. It shows how the much-lauded notion of citing sources only makes a generated answer seem more authoritative when in fact it is not. This is a trivial compilation. Generated compilations/summaries should be annotated “…according to the intern with no domain knowledge”.

PS/ As described above, Microsoft’s own “Responsible AI” efforts are now being used against it with Bing’s chatbot. It was an inevitable target, and this is only the start.

History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot www.nytimes.com/2023/02/23/opinion/microsoft-bing-ai-ethics.html?smid=tw-share