ChatGPT’s Impact On Our Brains According to an MIT Study
“The group that wrote essays using ChatGPT all delivered extremely similar essays that lacked original thought, relying on the same expressions and ideas. Two English teachers who assessed the essays called them largely ‘soulless.’ The EEGs revealed low executive control and attentional engagement. And by their third essay, many of the writers simply gave the prompt to ChatGPT and had it do almost all of the work. ‘It was more like, just give me the essay, refine this sentence, edit it, and I’m done,’ Kosmyna says.”
_
ChatGPT’s Opinion
Both the Time article and the Gizmodo piece raise valid and important concerns, but they also reflect a broader cultural anxiety about AI — what some might call a “techlash.” Here’s a balanced take on both:
🔍 What they get right:
- Critical thinking can erode with passive use
Both articles underscore a key truth: if you let AI do all the work, your brain gets lazy. This isn’t just speculation — it’s backed by cognitive science. Active learning is crucial for retention, deep thinking, and creativity. These studies highlight real risks, especially for students and developing minds.
- The analogy to calculators and Google Maps is helpful
Just like calculators didn’t kill math (but overusing them for basic operations can harm number sense), AI isn’t inherently harmful — but it can displace the mental effort that’s necessary for learning.
- Originality and creativity suffer
The studies found that ChatGPT-generated essays tended to be “soulless” and formulaic. That’s a real limitation of LLMs — they’re good at mimicking patterns, but not at truly original thought. Relying on them too heavily can flatten thinking.
- There’s urgency around education policy
The MIT researcher’s concern about “GPT kindergarten” is not far-fetched. Schools and policymakers are already integrating AI tools into classrooms. These decisions need to be informed by solid research and precaution, especially when developing brains are involved.
🤔 What needs caution or nuance:
- Small sample sizes, not-yet-peer-reviewed papers
The MIT study in particular, with just 54 participants, is very limited. While EEG data is useful, it doesn’t capture the full picture of intelligence or learning. More peer-reviewed, large-scale studies are needed before drawing sweeping conclusions.
- “AI makes you stupid” is too broad
The Gizmodo headline (“AI will make us morons”) is sensational. The reality is more nuanced: AI can dull your mind — but only if you use it passively or uncritically. It’s a tool. It can also enhance your thinking, if used wisely.
- Not everyone uses AI the same way
Some people use ChatGPT as a brainstorming partner, a writing coach, or a way to better understand complex topics — all of which can be mentally engaging. Others use it to skip the thinking part. The outcome depends on the user’s intent and behavior.
- Moral panic vs. thoughtful caution
There’s a fine line between raising the alarm and stoking fear. It’s important to investigate AI’s effects, but not to assume that all AI use leads to “cognitive decline.”
🧠 Final thought:
These articles provide a valuable warning: don’t outsource your thinking. AI is powerful, but it’s only as good as the way you use it. If you’re aware of its limitations and make an effort to stay mentally active, AI can be a great tool — not a trap.
16 Billion Apple, Facebook, Google And Other Passwords Leaked — Act Now
“This is not just a leak – it’s a blueprint for mass exploitation,” the researchers said. And they are right. These credentials are ground zero for phishing attacks and account takeover. “These aren’t just old breaches being recycled,” they warned, “this is fresh, weaponizable intelligence at scale.”
Steve Jobs on Copycats
Steve Jobs had a famously strong stance against copycats, especially when it came to Apple’s products. Here’s how he dealt with them, both publicly and behind the scenes:
1. Public Condemnation
Steve Jobs didn’t hold back when criticizing companies he felt copied Apple.
- Android: He was particularly angry with Google. In a 2011 biography by Walter Isaacson, Jobs is quoted saying: “I’m going to destroy Android, because it’s a stolen product. I’m willing to go thermonuclear war on this.”
- Samsung: Jobs and Apple executives were very vocal about Samsung copying the iPhone and iPad designs. This led to high-profile lawsuits.
2. Legal Action
Apple, under Jobs, aggressively used intellectual property (IP) laws to protect its designs and innovations.
- Lawsuits: Apple sued Samsung, HTC, and other manufacturers for patent infringement. The case against Samsung became one of the biggest tech lawsuits of the 2010s and led to years of courtroom battles.
- Design and Utility Patents: Apple patented everything from the iPhone’s rounded corners to multitouch gestures. Jobs believed deeply in the value of protecting design as much as function.
3. Doubling Down on Innovation
Rather than just reacting, Jobs pushed Apple to stay ahead:
- He believed design, integration, and user experience could not be easily copied.
- Apple consistently released refined and breakthrough products, making competitors look outdated.
4. Controlling the Ecosystem
Jobs understood that software and hardware integration was a competitive advantage.
- Apple kept iOS closed and tightly integrated with its devices.
- This “walled garden” approach made it harder for competitors to replicate the full experience.
5. Internal Philosophy
Jobs believed in being first and best, not just different.
He once quoted Picasso:
“Good artists copy; great artists steal.” But he meant it as “absorb and innovate,” not imitate.
Summary
Steve Jobs responded to copycats through:
- Public shaming
- Aggressive litigation
- Relentless innovation
- Strategic control of Apple’s ecosystem
He was fiercely protective of Apple’s originality, and made it clear that copying Apple wasn’t just unfair — it was a personal offense.
By ChatGPT
Motion Designer, Consumer UX — Google Careers
The US base salary range for this full-time position is $122,000-$178,000 + bonus + equity + benefits.