

Microhard?
except genAI has proven to have no purpose
Generative AI has spawned an awful lot of AI slop, and companies are forcing incomplete products on users. But don’t judge the technology by shitty implementations. There are loads of use cases where, when used correctly, generative AI brings value. For example, document discovery in legal proceedings.
100%, and like any tool, it can be used poorly, resulting in bit rot, bugs, unmaintainable code, etc. But when used well, given appropriate context, by users who know what good solutions look like, it can increase developer efficiency.
The author seems to think that OpenAI having an unsustainable business model means generative AI is a con. Generative AI doesn’t mean OpenAI 🤦‍♂️ There is a good chance the VC funds invested in OpenAI will have evaporated in 5 years. But generative AI will still exist in 5 years, it will be orders of magnitude more useful, and it will help solve many problems.
Were you the 18 or the 43?
I grew up coding but don’t read sci-fi. You need to expand your idea of what a good, motivated coder looks like. It sounds like you’re in a bubble.
two to the power of four = 16
I think twatgate would be more appropriate
It sounds like you are trying to start a pyramid scheme 😅
Plus you can give out really stupid email addresses that work
182cm and satisfied though I wouldn’t mind playing around with different heights for a day. Like being 10cm tall so I could play with my daughter as one of her toys. Or being 10m tall so I could clean my gutters.
It is impossible to accurately quantify how many deaths and human years lost can be directly attributed to Brian Thompson. UnitedHealthcare themselves may have some data but there are also indirect deaths and shortened life spans as a result of denied or delayed treatment.
That’s my point: if the model returns a hallucinated source, you can probably disregard its output. But if the model provides an accurate source, you can verify its output. Depending on the information you’re researching, this approach can be much quicker than using Google. Out of interest, have you experienced source hallucinations on ChatGPT recently (in the last few weeks)? I haven’t experienced source hallucinations in a long time.
Generative AI is a tool: sometimes it’s useful, sometimes it’s not. If you want a recipe for pancakes, you’ll get there a lot quicker using ChatGPT than using Google. It’s also worth noting that you can ask tools like ChatGPT for their references.
I guess this guy doesn’t like to get misgendered
I know the difference. None of OpenAI, Google, or Anthropic has admitted they can’t scale up their chatbots. That statement is not true.
Focus on the wisdom instead of the semantics