February 23, 2024
I’m trying to get in the habit of blogging a bit more frequently, so this post will read less collected than others.
I’ve largely avoided tinkering too much with the various generative AI tools, mostly because I have little patience for signing up for new things, and I’ve largely found my current methods of looking things up and getting things done adequate (maybe that’s my age showing). On top of that, especially with OpenAI’s latest product, Sora, I truly don’t understand why you would want to use it. If you’re a creative person with a vision you want to bring to life, why wouldn’t you grab a camera and start?
Some use cases I can sort of understand. I’ve asked ChatGPT to help me write a function I knew how to write in Python, but in JavaScript. But now that I’m pretty comfortable in JavaScript and TypeScript, I just read the Mozilla JavaScript docs or go to StackOverflow. Hopefully StackOverflow doesn’t get AI trained out of existence by these LLMs. A mentor at my company explained how he (or someone on his team, I can’t remember) used ChatGPT to build a Slack integration allowing authorized users to launch an AWS EC2 instance with certain parameters for quick prototyping. But I mean, you can build AWS resources from the command line with the AWS CLI installed, and you can set up a sandbox account to play around with prototypes.
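For what it’s worth, launching an instance from the CLI is already just one command; a minimal sketch, where the AMI ID, key pair, and security group below are hypothetical placeholders rather than real values:

```shell
# Launch one small EC2 instance for quick prototyping.
# All IDs and names here are placeholders -- substitute your own.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --key-name my-prototype-key \
  --security-group-ids sg-0123456789abcdef0 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=prototype}]' \
  --count 1
```

Wrap that in a script with a couple of parameters and you’ve got most of what the Slack bot does, minus the chat interface.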
The Slack integration is neat. Asking ChatGPT to build you a calendar app and seeing it return something that’s not half bad is neat. And that’s where my impression of these tools ends. Most people trying to learn and practice a skill (programmers very much included) need reps as well as practice looking at references for help. Those references could be people or documents. These tools at times provide references for the answers they give, but as a programmer I frankly don’t find what they provide that helpful or fun.
And recently, also at work, leadership acquired some licenses to use Amazon’s new code assistant and chatbot Q. No one’s been pushing developers to use Q very hard, but I can very easily see a future where we are forced or strongly encouraged to use tools like Q to boost productivity.
Now, we’re programmers. Increasing productivity is kind of what we’re here for. But what I don’t want is to spend 40 hours a week feeding training data to a black box that’s simply trying to feed its bottom line. That’s what all these big companies who’ve developed LLMs are doing. Even OpenAI, which started as a nonprofit out to save humanity from the potentially evil AI they helped create, added a for-profit arm. The nonprofit still runs the company, but we know where the money will be coming from.
All this money comes from tools that were developed by scraping data produced by countless hours of free labor (see Wikipedia, Reddit, GitHub, STACK OVERFLOW!!!). OpenAI and their ilk trained their machines on some of the broadest (and mostly accurate) collections of human knowledge ever created, and now they’re gonna sell that knowledge back to us? And help companies the world over automate not just the jobs of the developers who built these machines, but lawyers, teachers, truck drivers, whatever? Yeah, no, I don’t like this.
Machine learning (the interdisciplinary field from which these AI tools derive much of their theoretical basis) has some truly spectacular applications for simulation and modeling in fields like drug discovery and genetics. Fields where there are so many variables and interactions to track that computers supplement human cognition and ingenuity in a brilliant way. That work is great! Stealing blog posts from everyone to let your bot write some kid’s term paper is not so great!
And finally, I’d love to see a world where we’ve automated a lot of the boring work we humans don’t want to do. But I don’t think we’re going to get there with these current companies and their current structures. And they’re trying to automate fun stuff! Creative writing, making funny videos, bad poetry (trust me, I know), these things are fun! Don’t let us programmers take that from you!