robot

797485709662519296

donotdestroy:

“The group that wrote essays using ChatGPT all delivered extremely similar essays that lacked original thought, relying on the same expressions and ideas. Two English teachers who assessed the essays called them largely ‘soulless.’ The EEGs revealed low executive control and attentional engagement. And by their third essay, many of the writers simply gave the prompt to ChatGPT and had it do almost all of the work. ‘It was more like, just give me the essay, refine this sentence, edit it, and I’m done,’ Kosmyna says.”

_
ChatGPT’s Opinion

Both the Time article and the Gizmodo piece raise valid and important concerns, but they also reflect a broader cultural anxiety about AI — what some might call a “techlash.” Here’s a balanced take on both:

🔍 What they get right:

  1. Critical thinking can erode with passive use
    Both articles underscore a key truth: if you let AI do all the work, your brain gets lazy. This isn’t just speculation — it’s backed by cognitive science. Active learning is crucial for retention, deep thinking, and creativity. These studies highlight real risks, especially for students and developing minds.
  2. The analogy to calculators and Google Maps is helpful
    Just like calculators didn’t kill math (but overusing them for basic operations can harm number sense), AI isn’t inherently harmful — but it can displace the mental effort that’s necessary for learning.
  3. Originality and creativity suffer
    The studies found that ChatGPT-generated essays tended to be “soulless” and formulaic. That’s a real limitation of LLMs — they’re good at mimicking patterns, but not at truly original thought. Relying on them too heavily can flatten thinking.
  4. There’s urgency around education policy
    The MIT researcher’s concern about “GPT kindergarten” is not far-fetched. Schools and policymakers are already integrating AI tools into classrooms. These decisions need to be informed by solid research and precaution, especially when developing brains are involved.

🤔 What needs caution or nuance:

  1. Small sample sizes, unreviewed preprints
    In the MIT study especially, a sample of 54 participants is very small. While EEG data is useful, it doesn’t capture the full picture of intelligence or learning. More peer-reviewed, large-scale studies are needed before drawing sweeping conclusions.
  2. “AI makes you stupid” is too broad
    The Gizmodo headline (“AI will make us morons”) is sensational. The reality is more nuanced: AI can dull your mind — but only if you use it passively or uncritically. It’s a tool. It can also enhance your thinking, if used wisely.
  3. Not everyone uses AI the same way
    Some people use ChatGPT as a brainstorming partner, a writing coach, or a way to better understand complex topics — all of which can be mentally engaging. Others use it to skip the thinking part. The outcome depends on the user’s intent and behavior.
  4. Moral panic vs. thoughtful caution
    There’s a fine line between raising the alarm and stoking fear. It’s important to investigate AI’s effects, but not assume that all AI use leads to “cognitive decline.”

🧠 Final thought:

These articles provide a valuable warning: don’t outsource your thinking. AI is powerful, but it’s only as good as the way you use it. If you’re aware of its limitations and make an effort to stay mentally active, AI can be a great tool — not a trap.

796414755470442496

Learn how to change a DNS server in this step-by-step guide. We cover Windows, Mac, and router DNS changes, plus troubleshooting tips & more.
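As a quick illustration of the kind of change that guide walks through, DNS servers can also be switched from the command line. This is a minimal sketch, not taken from the guide itself: the interface name "Wi-Fi" and the 1.1.1.1/1.0.0.1 resolver addresses are placeholder assumptions you would substitute for your own.

```shell
# macOS: point the Wi-Fi interface at Cloudflare's resolvers (needs admin rights)
networksetup -setdnsservers Wi-Fi 1.1.1.1 1.0.0.1

# Windows (elevated prompt): set a static primary DNS server on the Wi-Fi adapter
netsh interface ipv4 set dnsservers name="Wi-Fi" static 1.1.1.1 primary

# Either platform: confirm lookups now go through the new resolver
nslookup example.com
```

Router-level changes, as the guide notes, are done in the router's admin page instead, and apply to every device on the network at once.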

796335917731364864

793581318158139392

hectortrunnec:

Nu Gundam watercolor (detail)
#illustration #gundam #watercolor #manga #anime #mecha #robot #japan #art #gunpla #modelkit

789169610560798720

donotdestroy:

“Going forward, as the rug of new tool after tool is pulled out from under us, and the flow of profound new capabilities continues to pick up speed, it will reach a point where humans have no choice but to surrender. Where our ability to uniquely track, learn and use any given tool better than anyone else will be irrelevant, as new tools with new capabilities will shortly solve for and reproduce the effect of whatever it was you thought you brought to the equation in the first place. That’s in the design plan. It will learn and replace the unique value of your contribution and make that available to everyone else.”

787581966072643584

