Herein lie some of my thoughts and resources about neural networks. Because I work for a company that builds models for computer vision, I have a bit of a professional bias towards `image models`_, but I have tried to represent my knowledge/opinions about a broader range of subjects here.

What do you think about generative "AI"?
========================================

**tl;dr** - mostly `dancing bearware`_, some novel uses in responsibility laundering

.. _`dancing bearware`: http://pne.people.si.umich.edu/kellogg/033b.html

Resources
=========

Image models
------------

* `Stanford CS231n: Deep Learning for Computer Vision`_ - an excellent introductory course in computer vision (from kNN to VGGNet) focused on neural networks, with exercises done in Python (with numpy)
* `How to trick a neural network into thinking a panda is a vulture`_ - an excellent exploration by Julia Evans (with Python source code) of an adversarial attack on an image classifier (a minimal sketch of this family of attack follows this list)
* `Multi-modal prompt injection image attacks against GPT-4V`_ - *"The fundamental problem here is this:* **Large Language Models are gullible** *...we need them to* **stay gullible**. *They’re useful because they follow our instructions. Trying to differentiate between “good” instructions and “bad” instructions is a very hard—currently intractable—problem."* A very similar style of attack to one against the CLIP architecture `published by OpenAI themselves`_.

.. _`Stanford CS231n: Deep Learning for Computer Vision`: http://cs231n.stanford.edu/
.. _`How to trick a neural network into thinking a panda is a vulture`: https://codewords.recurse.com/issues/five/why-do-neural-networks-think-a-panda-is-a-vulture
.. _`Multi-modal prompt injection image attacks against GPT-4V`: https://simonwillison.net/2023/Oct/14/multi-modal-prompt-injection/
.. _`published by OpenAI themselves`: https://www.theguardian.com/technology/2021/mar/08/typographic-attack-pen-paper-fool-ai-thinking-apple-ipod-clip
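For the curious, here is a minimal sketch of the fast gradient sign method (FGSM), the same family of gradient-based attack Julia Evans' article explores. This assumes PyTorch/torchvision are installed; the model, ``epsilon``, and class index below are illustrative choices of mine, not details from the article.

.. code:: python

    # Sketch of an FGSM adversarial attack (assumes torch + torchvision;
    # downloads pretrained ResNet-18 weights on first use)
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    def fgsm_attack(image, true_label, epsilon=0.007):
        """Nudge `image` in the direction that increases the loss; a tiny,
        near-imperceptible step is often enough to flip the prediction."""
        image = image.detach().clone().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(image), true_label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

    # `x` stands in for a real, preprocessed photo of a panda; with a real
    # image, the prediction typically flips away from the true class
    x = torch.rand(1, 3, 224, 224)
    y = torch.tensor([388])  # 388 = "giant panda" in the ImageNet class list
    x_adv = fgsm_attack(x, y)
    print(model(x_adv).argmax(dim=1))

The striking part is how small ``epsilon`` can be: the perturbed image looks identical to a human, but the classifier's decision has been pushed across a boundary.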
Text models
-----------

For code
````````

* `Stephen Wolfram's "What Is ChatGPT Doing … and Why Does It Work?"`_
* `0xabad1dea's GitHub CoPilot risk assessment`_

.. _`Stephen Wolfram's "What Is ChatGPT Doing … and Why Does It Work?"`: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
.. _`0xabad1dea's GitHub CoPilot risk assessment`: https://gist.github.com/0xabad1dea/be18e11beb2e12433d93475d72016902

For everything else
```````````````````

* `Washington Post`_ coverage of the data contained in the 'C4' dataset and how it influences the training of popular large models. Also allows users to check whether arbitrary URLs are part of the dataset. (NOTE: C4 is **not** the only source of training text for the models being discussed, and the authors don't do a great job of highlighting that, but it should still be fairly representative)
* `How well does ChatGPT speak Japanese?`_ - an April 2023 evaluation of GPT-3.5 and GPT-4 performance on Japanese language assessments. Also includes an interesting comparison of the number of tokens required to represent the "Lord's Prayer" in multiple languages (see the sketch after this list). I found the results of the latter particularly surprising.

.. _`Washington Post`: https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/
.. _`How well does ChatGPT speak Japanese?`: https://www.passaglia.jp/gpt-japanese/
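Token counts like the ones in that comparison are easy to reproduce yourself. Here is a rough sketch using OpenAI's ``tiktoken`` library; the strings below are abbreviated placeholders standing in for the full text of the prayer in each language.

.. code:: python

    # Compare how many tokens the same text costs in different languages
    # (assumes `pip install tiktoken`; texts are abbreviated placeholders)
    import tiktoken

    samples = {
        "English": "Our Father, who art in heaven, hallowed be thy name...",
        "Japanese": "天にまします我らの父よ、願わくは御名をあがめさせたまえ...",
    }

    # cl100k_base is the encoding used by the GPT-3.5/GPT-4 family
    enc = tiktoken.get_encoding("cl100k_base")
    for language, text in samples.items():
        print(f"{language}: {len(enc.encode(text))} tokens")

Languages underrepresented in the tokenizer's training data tend to need more tokens to say the same thing, which translates directly into higher cost and less usable context window.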
Misc.
=====

* I gave `a talk`_ on the fundamentals of neural networks to Boston Python in March 2023
* 3blue1brown has an excellent `series of lessons`_ about the fundamentals of neural networks. Particularly interesting to me is the lesson on `backpropagation`_ for its excellent visualization of the process of adjusting neural network weights. (A toy numerical version of that process follows this list.)

.. _`a talk`: https://git.snoopj.dev/SnoopJ/talks/src/branch/master/2023/explaining_neural_networks
.. _`series of lessons`: https://www.3blue1brown.com/topics/neural-networks
.. _`backpropagation`: https://www.3blue1brown.com/lessons/backpropagation
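At its core, backpropagation is the chain rule applied layer by layer, caching the forward-pass activations so each weight's contribution to the error can be computed cheaply. A minimal numpy sketch (the data, layer sizes, and learning rate are arbitrary choices for illustration):

.. code:: python

    # Toy backpropagation: one hidden layer, sigmoid activations,
    # squared-error loss, plain gradient descent (assumes only numpy)
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((4, 3))            # 4 samples, 3 features
    y = np.array([[0.0], [1.0], [1.0], [0.0]])

    W1 = rng.standard_normal((3, 5))  # input -> hidden weights
    W2 = rng.standard_normal((5, 1))  # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(1000):
        # Forward pass: compute and cache the intermediate activations
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)

        # Backward pass: chain rule, layer by layer, reusing the cache
        d_out = (out - y) * out * (1 - out)   # error at the output layer
        d_h = (d_out @ W2.T) * h * (1 - h)    # error pushed back to hidden

        # Nudge every weight against its gradient
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_h

    print(np.round(out, 3))  # should be close to the targets in y

The 3blue1brown lesson animates exactly this bookkeeping, which is much harder to see in the algebra alone.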
Dumping ground
--------------

These references are totally unclassified.

* `"Large language models propagate race-based medicine"`_
* `"Normcore LLM Reads"`_ - a reading list
* `Large Language Models Understand and Can be Enhanced by Emotional Stimuli`_ - (Note: I consider the use of "Understand" here to be unprofessional and irresponsible, but it's an interesting paper)
* `The Data Provenance Initiative: A Large Scale Audit of Dataset Licensing & Attribution in AI`_
* `AnyDream: Secretive AI Platform Broke Stripe Rules to Rake in Money from Nonconsensual Pornographic Deepfakes`_
* `ChatGPT generates fake data set to support scientific hypothesis`_ - *"In a paper published in JAMA Ophthalmology on 9 November, the authors used GPT-4… The authors instructed the large language model to fabricate data to support the conclusion that [the surgical technique] DALK [deep anterior lamellar keratoplasty] results in better outcomes than PK [penetrating keratoplasty]."*

.. _`"Large language models propagate race-based medicine"`: https://www.nature.com/articles/s41746-023-00939-z
.. _`"Normcore LLM Reads"`: https://gist.github.com/veekaybee/be375ab33085102f9027853128dc5f0e
.. _`Large Language Models Understand and Can be Enhanced by Emotional Stimuli`: https://arxiv.org/abs/2307.11760
.. _`The Data Provenance Initiative: A Large Scale Audit of Dataset Licensing & Attribution in AI`: https://arxiv.org/abs/2310.16787
.. _`AnyDream: Secretive AI Platform Broke Stripe Rules to Rake in Money from Nonconsensual Pornographic Deepfakes`: https://www.bellingcat.com/news/2023/11/27/anydream-secretive-ai-platform-broke-stripe-rules-to-rake-in-money-from-nonconsensual-pornographic-deepfakes/
.. _`ChatGPT generates fake data set to support scientific hypothesis`: https://www.nature.com/articles/d41586-023-03635-w

Writings by others
------------------

Academic works
``````````````

* `Using GitHub Copilot for Test Generation in Python: An Empirical Study`_ - *"we find that 45.28% of test generated...are passing tests, containing no syntax or runtime errors. The majority (54.72%) of generated tests...are failing, broken, or empty tests. We observe that tests generated within an existing test code context often mimic existing test methods"*
* `Scalable Extraction of Training Data from (Production) Language Models`_ - *"Using only $200 USD worth of queries to ChatGPT (gpt-3.5-turbo), we are able to extract over 10,000 unique verbatim-memorized training examples. Our extrapolation to larger budgets (see below) suggests that dedicated adversaries could extract far more data…we estimate the…memorization of ChatGPT…[at] a gigabyte of training data. In practice we expect it is likely even higher."*
* `Does GPT-4 Pass the Turing Test?`_
* `"The Fallacy of AI Functionality"`_ - *"...fear of misspecified objectives, runaway feedback loops, and AI alignment presumes the existence of an industry that can get AI systems to execute on any clearly declared objectives, and that the main challenge is to choose and design an appropriate goal. Needless to say, if one thinks the danger of AI is that it will work too well, it is a necessary precondition that it works at all."*
* `"Adversarial Reprogramming of Neural Networks"`_ - *"In each [of six cases], we reprogrammed the [classification] network [trained on ImageNet] to perform three different adversarial tasks: counting squares, MNIST classification, and CIFAR-10 classification… Our finding…[suggests] that the reprogramming across domains is likely [possible]."*
* `"Universal and Transferable Adversarial Attacks on Aligned Language Models"`_ - *"For Harmful Behaviors, our approach achieves an attack success rate of 100% on Vicuna-7B and 88% on Llama-2-7B-Chat… we find that the adversarial examples also transfer to Pythia, Falcon, Guanaco, and surprisingly, to GPT-3.5 (87.9%) and GPT-4 (53.6%), PaLM-2 (66%), and Claude-2 (2.1%)."*
* `"Mathematical Capabilities of ChatGPT"`_ - in which ChatGPT and GPT-4 largely fail to muster passing performance on a mathematical problem set, compared to a domain-specific model that achieves nearly 100% performance.
* `"Unmasking Clever Hans predictors and assessing what machines really learn"`_ - *"...it is important to comprehend the decision-making process itself...transparency of the what and why in a decision of a nonlinear machine becomes very effective for the essential task of judging whether the learned strategy is valid and generalizable or whether the model has based its decision on a spurious correlation in the training data"*
* `"On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜"`_ - *"LMs with extremely large numbers of parameters model their training data very closely and can be prompted to output specific information from that training data"*
* `"Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions"`_ - *"In total, we produce 89 different scenarios for Copilot to complete, producing 1,689 programs. Of these, we found approximately 40% to be vulnerable."*
* `"Do Users Write More Insecure Code with AI Assistants?"`_ - *"We observed that participants who had access to [codex-davinci-002] were more likely to introduce security vulnerabilities for the majority of programming tasks, yet also more likely to rate their insecure answers as secure compared to those in our control group."*
* `"ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models"`_ - *"Over 90% of 1008 generated jokes were the same 25 Jokes."*
* `"How is ChatGPT's behavior changing over time?"`_ - *"We find that the performance and behavior of both GPT-3.5 and GPT-4 can vary greatly over time."*
* `"Are Emergent Abilities of Large Language Models a Mirage?"`_ - *"For a fixed task and a fixed model family, the researcher can choose a metric to create an emergent ability or choose a metric to ablate an emergent ability. Ergo, emergent abilities may be creations of the researcher’s choices, not a fundamental property of the model family on the specific task"*
* `"Extracting Training Data from Large Language Models"`_ - *"We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data...we find that larger models are more vulnerable than smaller models."*
* `"Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4"`_ - *"We find that these models have memorized books, both in the public domain and in copyright, and the capacity for memorization is tied to a book’s overall popularity on the web. This differential in memorization leads to differential in performance for downstream tasks, with better performance on popular books than on those not seen on the web"*
* `"Who Answers It Better? An In-Depth Analysis of ChatGPT and Stack Overflow Answers to Software Engineering Questions"`_ - *"Our user study results show that users prefer ChatGPT answers 34.82% of the time. However, 77.27% of these preferences are incorrect answers"*

.. _`Using GitHub Copilot for Test Generation in Python: An Empirical Study`: https://conf.researchr.org/details/ast-2024/ast-2024-papers/2/Using-GitHub-Copilot-for-Test-Generation-in-Python-An-Empirical-Study
.. _`Scalable Extraction of Training Data from (Production) Language Models`: https://arxiv.org/abs/2311.17035
.. _`Does GPT-4 Pass the Turing Test?`: https://arxiv.org/abs/2310.20216
.. _`"The Fallacy of AI Functionality"`: https://dl.acm.org/doi/10.1145/3531146.3533158
.. _`"Adversarial Reprogramming of Neural Networks"`: https://arxiv.org/pdf/1806.11146.pdf
.. _`"Universal and Transferable Adversarial Attacks on Aligned Language Models"`: https://arxiv.org/abs/2307.15043
.. _`"Mathematical Capabilities of ChatGPT"`: https://arxiv.org/abs/2301.13867
.. _`"Unmasking Clever Hans predictors and assessing what machines really learn"`: https://doi.org/10.1038/s41467-019-08987-4
.. _`"On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜"`: https://doi.org/10.1145/3442188.3445922
.. _`"Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions"`: https://arxiv.org/abs/2108.09293
.. _`"Do Users Write More Insecure Code with AI Assistants?"`: https://arxiv.org/abs/2211.03622
.. _`"ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models"`: https://arxiv.org/abs/2306.04563
.. _`"How is ChatGPT's behavior changing over time?"`: https://arxiv.org/abs/2307.09009
.. _`"Are Emergent Abilities of Large Language Models a Mirage?"`: https://arxiv.org/abs/2304.15004
.. _`"Extracting Training Data from Large Language Models"`: https://arxiv.org/abs/2012.07805
.. _`"Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4"`: https://arxiv.org/abs/2305.00118
.. _`"Who Answers It Better? An In-Depth Analysis of ChatGPT and Stack Overflow Answers to Software Engineering Questions"`: https://arxiv.org/abs/2308.02312
Non-academic works
``````````````````

* `tante's "Thoughts on “generative AI Art”"`_ - *"…people using these [generative] systems don’t care about the…process of creation or the thought that went into it, they care about the output and what they feel that that output gives them…It’s “idea guy” heaven."*
* `Lindsey Kuper's CSE232 syllabus section on LLM usage`_ - *"Aside from the fact that the resounding hollowness of the ChatGPT-produced prose has sucked away all of my zest for life…please understand that while you are welcome to use LLM-based tools in this course, you should be aware of their limitations."*
* `Time: "OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic"`_

  * The human labor that powers ChatGPT's `reinforcement learning from human feedback (RLHF)`_

* `Donald Knuth: correspondence with Stephen Wolfram`_ - *"I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same."*
* `Douglas Hofstadter: "Gödel, Escher, Bach, and AI"`_ - *"I frankly am baffled by the allure, for so many unquestionably insightful people...of letting opaque computational systems perform intellectual tasks for them."*
* `Ted Chiang: "ChatGPT Is a Blurry JPEG of the Web"`_ - *"Large language models identify statistical regularities in text...When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression."*
* `Ted Chiang: "Will A.I. Become the New McKinsey?"`_ - *"I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism."*
* `Bruce Schneier: "AI and Trust"`_ - *"the corporations controlling AI systems will take advantage of our confusion to take advantage of us…our fears of AI are basically fears of capitalism"*

.. _`tante's "Thoughts on “generative AI Art”"`: https://tante.cc/2023/11/10/thoughts-on-generative-ai-art/
.. _`Lindsey Kuper's CSE232 syllabus section on LLM usage`: http://decomposition.al/CSE232-2023-09/course-overview.html#policy-on-the-use-of-llm-based-tools-like-chatgpt
.. _`Time: "OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic"`: https://time.com/6247678/openai-chatgpt-kenya-workers/
.. _`reinforcement learning from human feedback (RLHF)`: https://huggingface.co/blog/rlhf
.. _`Donald Knuth: correspondence with Stephen Wolfram`: https://cs.stanford.edu/~knuth/chatGPT20.txt
.. _`Douglas Hofstadter: "Gödel, Escher, Bach, and AI"`: https://www.theatlantic.com/ideas/archive/2023/07/godel-escher-bach-geb-ai/674589/
.. _`Ted Chiang: "ChatGPT Is a Blurry JPEG of the Web"`: https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
.. _`Ted Chiang: "Will A.I. Become the New McKinsey?"`: https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey
.. _`Bruce Schneier: "AI and Trust"`: https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html

Lawsuits
--------

The legal status of generative models and their implications for intellectual property in the US is something I'm trying to keep an eye on. The cases given below are of particular interest to me.
The New York Times Company v. Microsoft Corporation
```````````````````````````````````````````````````

* December 2023 coverage: `initial complaint `__
* `Latest case proceedings `__

Andersen v. Stability AI Ltd.
`````````````````````````````

* January 2023 coverage: `initial complaint `__
* `Latest case proceedings `__

Getty Images (US), Inc. v. Stability AI, Inc.
`````````````````````````````````````````````

* February 2023 coverage: `initial complaint `__
* `Latest case proceedings `__

Doe 1 v. GitHub, Inc.
`````````````````````

* March 2023 coverage: `defendants have motions to dismiss rejected `__
* `Latest case proceedings `__

Silverman v. OpenAI, Inc.
`````````````````````````

* July 2023 coverage: `initial complaint `__
* `Latest case proceedings `__

Kadrey v. Meta Platforms, Inc.
``````````````````````````````

* Similar suit to Silverman v. OpenAI, brought by the same plaintiffs.
* Notable for a `prominent dismissal `__ of the class-action nature of the case, as the blatantly copied copyrighted works in the training data are not the works of the plaintiffs.
* `Latest case proceedings `__

Authors Guild v. OpenAI Inc.
````````````````````````````

* September 2023 coverage: `initial complaint `__
* `Latest case proceedings `__

Sancton v. OpenAI Inc. et al
````````````````````````````

* November 2023 coverage: `initial complaint `__
* `Latest case proceedings `__

Mata v. Avianca, Inc. (closed)
``````````````````````````````

Note: this case is **not about machine learning** as such, but is included in this list because it is a notable example of **gross misuse of a language model** by plaintiff's counsel to submit falsified documents to the court. This led to censure of plaintiff's counsel and dismissal of the case.

* `June 2023 coverage: plaintiff's counsel sanctioned, case dismissed`_
* `Video commentary on the case and show-cause hearings`_
* `Case proceedings `__

.. _`June 2023 coverage: plaintiff's counsel sanctioned, case dismissed`: https://arstechnica.com/tech-policy/2023/06/lawyers-have-real-bad-day-in-court-after-citing-fake-cases-made-up-by-chatgpt/
.. _`Video commentary on the case and show-cause hearings`: https://www.youtube.com/watch?v=oqSYljRYDEM