• 16 Posts
  • 97 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • For me it was playing Life is Strange for the first time. I bought it because it had been listed on Steam as “Overwhelmingly Positive” for ages, and at the time I was really enjoying the story-based games that companies like Telltale were producing. So, knowing nothing about the game, I picked it up and started playing it.

    The first act was slow. What I didn’t realize at the time was that the writers were establishing Arcadia Bay, a small town in the Pacific Northwest, as a character in its own right. Everyone in it needed to be recognizable, so the game took its time teaching the player who these people were, what mattered to them, how they fit into the town, and what their flaws were. I actually stopped playing for a while after the first act. But, luckily, I picked it back up over the holiday season.

    I still remember playing it in my living room. I was so thoroughly absorbed in the story that when something tense happened in the second act and I couldn’t stop it the way I normally could, I was practically crushing the controller, as if I could make things work by pulling the triggers harder.

    I am decidedly not the demographic that Life is Strange was written to appeal to, but they did such a good job writing a compelling story that it didn’t matter. I got sucked in, the characters became important to me, and I could not. put. it. down. I played straight through the night until I finished it.

    (If you’ve played it and you’re wondering, I chose the town the first time I played it.)

    I’ll never forget that game. I’ll also never forget the communities that spawned around it. For about a year afterward, I read the accounts of people who had just played it for the first time, because it helped me relive the experience I’d had. It was incredible.

  • I’ve just spent a few weeks continually enhancing a script in a language I’m not all that familiar with, exclusively using ChatGPT (GPT-4). The experience leaves a LOT to be desired.

    The first few prompts are nothing short of amazing. You go from blank page to something that mostly works in a few seconds. Inevitably, though, something needs to change. That’s where things start to go awry.

    You’ll get a few changes in, and things will be going well. Then you’ll ask for another change, and the resulting code will quietly eliminate one of your earlier changes. For example, I asked ChatGPT to write a quick Python script that does fuzzy matching. I wanted to feed it a list of filenames from a file and have it find the closest match on my hard drive. I asked for a progress bar, which it added. By the time I was done having it generate code, the progress bar had been removed a couple of times and swapped out for a different progress bar at least three times. (On the bright side, I now know of multiple progress-bar solutions in Python!)
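    For what it’s worth, the final version looked roughly like the sketch below. This is a reconstruction from memory, not the exact code ChatGPT produced; difflib is in the standard library, and tqdm is just one of the several progress-bar packages it cycled through.

    ```python
    # Rough sketch: fuzzy-match a list of filenames against everything on disk.
    # difflib is stdlib; tqdm (pip install tqdm) draws the progress bar.
    import difflib
    import os
    import sys

    from tqdm import tqdm

    def index_files(root):
        """Walk the tree once and map each filename to its full path."""
        names = {}
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                names.setdefault(name, os.path.join(dirpath, name))
        return names

    def main(list_file, root):
        with open(list_file) as f:
            targets = [line.strip() for line in f if line.strip()]
        candidates = index_files(root)
        filenames = list(candidates)
        for target in tqdm(targets, desc="Matching"):
            # Closest filename by difflib's similarity ratio, if any.
            match = difflib.get_close_matches(target, filenames, n=1, cutoff=0.6)
            print(target, "->", candidates[match[0]] if match else "(no match)")

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])
    ```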

    If you continue on long enough, ChatGPT’s context window isn’t big enough to “remember” everything you’ve been doing. You get to the point where you have to paste your entire script back in very frequently just to give it the context it needs to answer a question or implement a change.

    And on top of all that, it often doesn’t implement the best change. In one instance, I wanted it to write a function that would parse a CSV, count up duplicate values in a particular field, and add that count to each row of the CSV. I could tell right away that its first solution was an inefficient way to accomplish the task, so I had to ask it directly, in another prompt, whether the code was efficient. (I was genuinely impressed that it recognized the problem once I raised it and gave me something that ended up being quite fast.)
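    I don’t remember the exact code it produced after that nudge, but the efficient version was essentially the standard “count once, then annotate” approach: one linear pass with a Counter instead of re-scanning the file for every row. A rough sketch (the field and file names here are invented):

    ```python
    # Count duplicates of one CSV field in a single pass with Counter,
    # then annotate each row with its count: O(n) instead of O(n^2).
    import csv
    from collections import Counter

    def add_dupe_counts(in_path, out_path, field):
        with open(in_path, newline="") as f:
            rows = list(csv.DictReader(f))
        counts = Counter(row[field] for row in rows)  # one pass to count

        with open(out_path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(rows[0]) + ["duplicate_count"])
            writer.writeheader()
            for row in rows:
                row["duplicate_count"] = counts[row[field]]
                writer.writerow(row)

    # Hypothetical usage; substitute the real file paths and field name.
    add_dupe_counts("input.csv", "output.csv", "email")
    ```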

    Moral of the story: you can’t do this effectively without an understanding of computer science.

  • Regardless of whether any of the titles actually contain said content, ChatGPT’s varying responses highlight troubling deficiencies in accuracy, analysis, and consistency. Repeat inquiries regarding The Kite Runner, for example, give contradictory answers. In one response, ChatGPT deems Khaled Hosseini’s novel to contain “little to no explicit sexual content.” In a separate follow-up, the LLM affirms the book “does contain a description of a sexual assault.”

    On the one hand, the possibility that ChatGPT will hallucinate that an appropriate book is inappropriate is a big problem. On the other hand, high-profile mistakes like this keep the practice in the news and keep demonstrating how bad it is to ban books, so maybe there’s a silver lining.