• 0 Posts
  • 86 Comments
Joined 1 year ago
Cake day: September 22nd, 2023

  • The universal design ideal is just that: an ideal. Hopefully we strike an appropriate balance between different people’s needs. I think it’s good to recognize that some things are more challenging for some people than for others. There should be a reasonable responsibility to anticipate others’ needs, along with advocacy for one’s own needs.

    I’ll give you my own example as a person with a disability. Have I been late to meetings/appointments “because of my disability”? Sure. I do plan to have enough time to make it work, but that doesn’t always happen, because of unforeseen issues that come up for me that wouldn’t for most others. Do I consider that the fault of the others involved? Of course not; let’s be practical.

    On the other hand, when possible, which has been 100% of the time since March 2020, I essentially demand virtual meetings now, because my computer is the most accessible environment in the world for me and puts me on far more even footing with others.

  • That is pretty interesting, and thanks for posting it. I hear the words and it’s intriguing, but to be honest I don’t really understand it. I’d have to give it some thought and read more about it. Do you have a place you’d suggest going to learn more?

    I currently use ChatGPT-4o for learning Python and for help with grammar. I find it does great with grammar, but even with relatively simple Python questions it can produce some “creative” answers. It’s in the ballpark, but it’s not perfect, and for a learner that’s learning the hard way. To be fair, I don’t use the assistant/code interpreter, which I know nothing about, though based on the name I assume it might be better. So that’s what I’ve based my somewhat skeptical opinion of AI on. A sketch of the kind of subtle miss I mean follows below.
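    For illustration, here’s a hypothetical Python snippet (my own made-up example, not an actual ChatGPT-4o answer) of the “in the ballpark but not perfect” kind: it runs without error and looks reasonable, but a mutable default argument makes it quietly wrong.

      # Hypothetical illustration: both functions run, but the first has a
      # classic Python trap. The default list is created only once, so items
      # accumulate across calls instead of starting fresh each time.
      def add_item_plausible(item, items=[]):
          items.append(item)
          return items

      # The usual correct idiom: create a new list inside the function.
      def add_item_correct(item, items=None):
          if items is None:
              items = []
          items.append(item)
          return items

      print(add_item_plausible("a"))  # ['a']
      print(add_item_plausible("b"))  # ['a', 'b']  <- surprise carry-over
      print(add_item_correct("a"))    # ['a']
      print(add_item_correct("b"))    # ['b']

    Both versions “work” on a single call; the difference only shows up across calls, which is exactly the kind of thing a learner leaning on model answers may not catch.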

  • From my understanding, AI is essentially a statistical method, so naturally it will produce a confidence level. It’s hard for me to take the leap of faith that confidence will correlate with accuracy. It seems to me it would depend more on the data set. If the data contains a commonly held belief that is incorrect, wouldn’t it report high confidence in an answer containing that incorrect info? If we use only a highly authoritative data set, it will be very limited, and we’d be back to more of a keyword system than an LLM. I am sure that with time we’ll reach more of a middle ground where accuracy is better, but what will the error rate be? 5%? 3%? 10%? There’s a toy sketch of the “confident about a misconception” point below.

    I’ll freely admit I am not an expert in this at all.
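    To make the data-set worry concrete, here’s a toy sketch (my own, vastly simpler than a real LLM, with a made-up misconception as the example): “confidence” here is just how often each answer appears in the training data, so a misconception that dominates the data gets high confidence.

      from collections import Counter

      # Toy "model": confidence is just relative frequency in the data.
      # The popular misconception appears 90 times, the correct answer 10.
      corpus = (
          ["we only use 10% of our brains"] * 90   # common but incorrect
          + ["we use virtually all of our brain"] * 10
      )

      counts = Counter(corpus)
      total = sum(counts.values())

      for answer, n in counts.most_common():
          print(f"{n / total:.0%} confident: {answer}")

      # Prints:
      # 90% confident: we only use 10% of our brains
      # 10% confident: we use virtually all of our brain
      # The high confidence tracks frequency in the data, not truth.

    A real LLM’s probabilities come from far richer statistics, but the basic point stands: they are calibrated to the training distribution, not to ground truth.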