Artificial intelligence tools like ChatGPT and GitHub Copilot are reshaping how we learn and work, but a groundbreaking MIT study suggests they might come with a cognitive catch.

By examining how large language models (LLMs) affect essay writing, MIT researchers uncovered potential downsides to over-reliance on AI.

Yet, other research—from consulting to coding and creative writing—shows that when used strategically, AI can supercharge productivity and quality.

This blog dives into MIT’s findings, reconciles contrasting studies, and shares practical ways to harness AI to boost your brainpower rather than dim it.

Read the MIT study at MIT News.

MIT’s Warning: AI’s Impact on Your Brain

MIT’s study involved 54 students split into three groups: one used LLMs to write essays, another used search engines for research, and the third relied solely on their brainpower (Brain-only).

Over three sessions, each group stuck to their method, and in a fourth, some switched—LLM users went tool-free, and Brain-only writers tried LLMs. Using brain scans (EEG), essay analysis (NLP), and evaluations by human teachers and an AI judge, the study revealed a stark pattern.  

Brain-only writers showed the strongest neural connections, signaling deep mental engagement. Search engine users had moderate brain activity, while LLM users had the weakest connections, suggesting AI reduced cognitive effort.

When LLM users switched to Brain-only writing, their brains struggled to engage, showing signs of under-activation. Meanwhile, Brain-only writers using LLMs saw boosts in memory and focus, resembling search engine users.

LLM users’ essays were polished but less original, and they felt detached, often unable to recall their own words.

Over four months, LLM reliance was linked to lower cognitive engagement and writing quality, hinting that tools like ChatGPT could dull your mental edge if overused.

The Other Side: AI’s Power to Amplify Performance

While MIT highlights cognitive risks, other studies show LLMs shine in specific tasks when guided well, creating an apparent contradiction.

A 2023 study, “Boosting Theory-of-Mind Performance in Large Language Models via Prompting,” found that with tailored prompts GPT-4 hit 100% accuracy on tasks requiring understanding of human mental states, outpacing humans (83%). This shows LLMs’ reasoning potential, but it focuses on AI output, not user cognition (ToM Study).

Another study, “Are Emergent Abilities of Large Language Models a Mirage?” argues that LLMs’ advanced skills depend on measurement methods, suggesting their capabilities are predictable, not revolutionary.

This aligns with MIT’s view that AI may not enhance human thinking (Emergent Abilities Study).

In professional settings, AI delivers measurable gains. A 2023 experiment with 758 Boston Consulting Group (BCG) employees showed consultants using GPT-4 finished tasks 25% faster, with work rated 42% higher in quality.

Their research and slide-making benefited from AI’s efficiency. GitHub’s analysis found developers using Copilot cut coding time by 55%, improved readability, and reduced errors, with halved time-to-merge and higher job satisfaction.

In competitive coding, “GPT-o3” earned a 2700 Codeforces rating, surpassing most humans. A Science Advances study showed AI-assisted stories were rated 20% more creative, especially for novices, though human curation prevented uniformity (BCG Study; GitHub Blog; Codeforces; Science Advances).

Making Sense of the Conflict

MIT’s study focuses on human cognition, showing that leaning too heavily on LLMs like ChatGPT can reduce brain engagement and ownership.

The ToM study highlights LLM performance, not user thinking, while the emergent abilities study suggests AI’s strengths are tool-specific.

Professional studies (BCG, GitHub, etc.) measure AI-augmented output, showing gains when humans guide AI actively. As X posts note, “AI is nice, but does it pay the bills? Yes—when you steer it.”

Together, these suggest AI boosts results with human effort, but passive use risks cognitive complacency, as MIT warns.

How to Beat the AI Brain Drain and Get Smarter

ChatGPT doesn’t have to make you dumber; it can make you smarter if you use it right. Here are five research-backed strategies to outsmart AI’s pitfalls and amplify your abilities:

Offload Grunt Work, Keep the Big Thinking

Why: BCG and GitHub show AI excels at repetitive tasks, saving consultants 25% of their time and developers 55% of coding time.
How: Use ChatGPT or Perplexity for quick research or summaries, then analyze and synthesize findings yourself. For slides, try gamma.app to draft, but add your unique spin. Developers can use Copilot for routine code, reviewing suggestions to ensure quality.
Example: A consultant drafts a market report with GPT-4, then refines it with original insights, boosting quality by 42%.

Master the Art of Prompting

Why: The ToM study showed GPT-4 achieved 100% accuracy with precise prompts, maximizing AI’s reasoning.
How: Write clear prompts, like “List three renewable energy trends with sources” or “Write Python code for a sorting algorithm with comments.” Tweak prompts to refine results, as seen in ToM tasks.
Example: A student prompts ChatGPT for an essay outline, then rewrites it to deepen understanding, avoiding MIT’s detachment issue.
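The second prompt above (“Write Python code for a sorting algorithm with comments”) might return something like the commented insertion sort below. Treating such output as a starting point to study and rewrite, rather than pasting it verbatim, is what keeps you on the right side of MIT’s findings (a hypothetical illustration, not actual ChatGPT output):

```python
def insertion_sort(items):
    """Sort a sequence in ascending order using insertion sort."""
    result = list(items)  # copy so the input is left untouched
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements one slot to the right
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

print(insertion_sort([5, 2, 9, 1]))  # → [1, 2, 5, 9]
```

Re-deriving the shifting logic yourself, instead of just running it, is exactly the kind of active engagement the Brain-only group showed in the MIT study.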

Edit AI for Your Unique Voice  

Why: Science Advances found AI stories were 20% more creative but needed human curation to stand out.
How: Use AI to brainstorm or draft, then revise heavily to reflect your style. For coding, check Copilot’s suggestions to avoid generic code, as GitHub’s readability gains show.
Example: A writer drafts a story with an LLM, then adds unique twists, leveraging the creativity boost.

Go AI-Free to Stay Sharp  

Why: MIT found tool-free writing strengthens neural connections, crucial for cognitive growth.
How: Regularly draft essays or solve problems without AI, using tools later to polish. This balances engagement with efficiency.
Example: A developer tackles a Codeforces problem manually, then optimizes with Copilot, aiming for “GPT-o3’s” 2700 rating.

Always Check AI’s Work  

Why: BCG and GitHub stress human oversight ensures quality, with Copilot’s 53% higher unit test pass rate tied to developer reviews.
How: Verify AI outputs for accuracy and relevance. For coding, use GitHub Advanced Security. For writing, cross-check facts with primary sources.
Example: A consultant uses GPT-4 for a report but fact-checks manually, securing the 42% quality gain.
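One lightweight way to check AI’s work on code is to exercise every suggestion with small tests on edge cases before accepting it. A minimal sketch, in which merge_intervals stands in for a hypothetical Copilot suggestion (not real Copilot output):

```python
# Suppose Copilot suggested this helper (a hypothetical suggestion):
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals into a minimal list."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous interval: extend it
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

# Review step: probe edge cases before merging the suggestion.
assert merge_intervals([]) == []
assert merge_intervals([[1, 3], [2, 6], [8, 10]]) == [[1, 6], [8, 10]]
assert merge_intervals([[1, 4], [4, 5]]) == [[1, 5]]  # touching endpoints
print("all checks passed")
```

Writing the test cases yourself, especially the empty and touching-endpoint cases an LLM might gloss over, is the active oversight the BCG and GitHub results depend on.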

What This Means for Learning and Work

MIT’s study warns that overusing LLMs in education could dull critical thinking, but BCG, GitHub, and Science Advances show strategic AI use enhances output. Educators should teach students to prompt effectively, curate outputs, and practice independently to balance efficiency with learning. In workplaces, AI can drive results, like Copilot’s 55% coding speed boost, while human oversight maintains skills.

ChatGPT could make you dumber if you let it take over, but with the right approach, it can make you smarter. MIT’s research highlights the risks of passive AI use, while other studies show the rewards of active engagement. By automating routine tasks, prompting precisely, and staying in control, you can harness AI to work better and think sharper.

Explore the MIT study and try these strategies to beat the AI brain drain.

References  

MIT Study: MIT News  

Theory-of-Mind Study: arXiv  

Emergent Abilities Study: arXiv  

BCG Consultant Study: BCG AI Productivity Study  

GitHub Copilot Analysis: GitHub Blog  

Codeforces GPT-o3 Performance: Codeforces  

Science Advances Creative Writing Study: Science Advances

Posted Jun 22, 2025 in the Skills For Future category