Hash matching can slow the spread of NCII, but it is not a silver bullet. It only catches known content, struggles with edits and AI variants, and cannot determine consent or context on its own. That is why governance, not just tech, matters. Read CDT’s brief:
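For context, here is a minimal sketch of what hash matching typically looks like, using the open-source imagehash library; the known-hash list, threshold, and function name below are illustrative assumptions, not CDT's or any vendor's actual system:

```python
# Minimal sketch of hash matching, assuming the open-source `imagehash`
# library; the known-hash list and threshold are hypothetical.
import imagehash
from PIL import Image

# Perceptual hashes of previously reported images (in real deployments,
# populated from a shared database of known content).
KNOWN_HASHES: list[imagehash.ImageHash] = []
MATCH_THRESHOLD = 8  # max Hamming distance still counted as a match

def is_known(path: str) -> bool:
    """Flag an image only if it is close to an already-hashed original."""
    h = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(h - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)
```

Nothing in this pipeline sees consent or context: it compares bit patterns, so an image that was never hashed, an edit large enough to push the distance past the threshold, or a wholly new AI-generated variant will never match.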
📌 At a Glance
• AI companies are buying regular software companies and changing data rules
• Your work tools and personal data could be used differently without permission
• Fake images of students are becoming a serious problem in schools
Summary:
• Technology that identifies harmful intimate images has significant limitations
• It only catches known content and struggles with edited or AI-generated versions
• Good policies and human oversight are needed alongside the technology
Summary:
• Sam Altman announced new partnerships for World ID verification with major companies
• MIT Technology Review predicted these developments 4 years ago
• The prediction included warnings about privacy violations in the system
Summary:
• Tech policy experts discussed who should control AI's role in making the internet healthier
• The conversation covered how AI decisions shape our online experiences
• Highlights the debate over who gets to make choices about AI in digital spaces
Summary:
• Investment in humanoid robots jumped to $6.1 billion in 2025, four times higher than 2024
• The increase comes from major improvements in how robots interact with the world
• Shows rapid growth and investor confidence in robot technology
Summary:
• AI has made creating fake content much easier and faster for more people
• The technology barrier is lower, so anyone can make convincing fake content with little effort
• This creates a flood of harmful content that's harder to manage
Summary:
• Training series about the AI industry and data centers starts April 22nd
• Will cover policy solutions and organizing strategies to challenge corporate power
• Focuses on how communities can fight back against big tech companies
Summary:
• 15% of students saw fake explicit images of people from their school made with AI
• These deepfakes are becoming a widespread problem in schools across the country
• Shows how AI technology is being misused to harm and embarrass students
Summary:
• 15% of students know about fake explicit images made with AI at their schools
• The real number of these deepfake incidents is likely much higher than reported
• Shows AI abuse in schools is a growing problem that's often hidden
Summary:
• Pentagon's AI guidelines assume humans understand how AI systems make decisions, but they don't
• The real danger isn't AI acting alone, but humans not knowing what AI is actually "thinking"
• This creates serious risks when AI is used in military and defense situations
Summary:
• About 1 in 5 high school students know someone who has dated an AI chatbot
• This shows AI companions are becoming a real part of teenage relationships
• Survey data from 2025 reveals how common these digital relationships have become
Summary:
• Criminals are buying special tools on Telegram that fool banks' face recognition systems
• Crypto scams alone are expected to steal $17 billion in 2025
• Banks and government agencies can't keep up with these new scam methods
Summary:
• Smart glasses with face recognition could let anyone secretly identify people in public
• This technology would enable stalkers, scammers, and others to track people without their knowledge
• People should be able to go about their daily lives without fear of being secretly identified
Summary:
• Researcher presenting study on how people use AI chatbots for mental health support
• Sharing findings at academic conference about emotional uses of AI technology
• Long-term research shows how people rely on AI for mental and emotional care
Summary:
• Companies using AI to cut labor costs are creating lots of low-quality work instead
• AI tools are marketed as doing everything but often fail in real-world use
• Shows gap between AI promises and actual performance in workplace settings
Summary:
• Colorado's AI law regulates how AI makes important decisions about people
• Covers AI use in healthcare and employment decisions
• Expert explains that the law controls AI in the areas that matter most to people
Summary:
• Modern AI horror stories focus on machines that want things, not smart machines
• People aren't scared of AI that knows a lot of information
• The real fear is AI that has its own desires and goals
Summary:
• Technology Review has created a new list of important AI developments
• They're sharing trends and advances in artificial intelligence
• Could help people understand what's happening in AI right now
Summary:
• HIPAA privacy laws don't protect people from AI companies accessing their health data
• AI companies can take and use health information without worrying about these laws
• Current privacy protections have major gaps when it comes to AI
Summary:
• Expert calls for independent testing of AI tools to assess their risks
• Says democratic institutions, not private companies, should decide how AI is released
• Argues that companies have their own interests that may conflict with public safety
Summary:
• AI systems are being used to control and disempower workers
• Researcher says this repeats patterns we've seen with other technologies
• Shows how AI can make workplace conditions worse for employees
Summary:
• Stanford's AI Index provides an annual summary of key AI developments and trends
• Offers a chance to step back and assess progress in the fast-moving AI industry
• Reminds people that AI development is a long-term effort, not a quick race
The Distributed AI Research (DAIR) Institute @dairinstitute.bsky.social 4d ago
Summary:
• AI researchers are hosting a livestream discussion about AI hype
• Features experts talking with author Carmen Maria Machado
• Event is called Mystery AI Hype Theater 3000 and streams on Twitch
Summary:
• Government asked tech companies for help with cyberattacks and got 'free' upgrades
• Trump administration is doing something similar with AI companies
• Investigation reveals these deals aren't actually free and come with hidden costs
Summary:
• When companies can't oppose all regulations, they create their own weak alternatives
• These company-made rules sound good but have no real enforcement power
• Allows companies to control how they're regulated instead of government oversight
Summary:
• OpenAI released new policy documents that sound like they want more oversight
• Behind the scenes, the company has been pushing against AI regulations
• Shows disconnect between public statements and actual lobbying efforts
Summary:
• Expert warns about increased risk of AI projects failing in government
• Could waste taxpayer money on broken or ineffective AI systems
• May expose the public to various harms from poorly implemented AI
Summary:
• Government agencies are using risky AI tools without proper oversight
• No real systems in place to check if these tools actually work well
• Agencies can't verify if their AI systems are performing as expected
Summary:
• Google's AI search answers are correct about 90% of the time
• With billions of searches daily, this means millions of wrong answers every hour (see the back-of-envelope math after this summary)
• Shows how even high accuracy rates can create massive problems at scale
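For scale, a quick back-of-envelope calculation; the daily query volume below is an assumed figure for illustration, not one reported in the post:

```python
# Back-of-envelope math behind "millions of wrong answers every hour".
# queries_per_day is an assumption for illustration only.
queries_per_day = 8_500_000_000   # assumed: roughly 8.5 billion searches/day
error_rate = 0.10                 # ~90% correct implies ~10% wrong
wrong_per_hour = queries_per_day * error_rate / 24
print(f"{wrong_per_hour:,.0f} wrong answers per hour")  # ~35 million
```

Even if the true volume is a few times smaller, a 10% error rate still yields millions of wrong answers per hour, which is the scale point the post is making.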
Summary:
• AI news is confusing with conflicting reports about success and failure
• New annual AI report aims to cut through the hype and confusion
• Provides clearer picture of where AI actually stands in 2026
The Distributed AI Research (DAIR) Institute @dairinstitute.bsky.social 6d ago
Summary:
• Upcoming live discussion about AI language models being used in writing and education
• Features author Carmen Maria Machado discussing impact on publishing industry
• Part of series examining AI hype versus reality in creative fields
Summary:
• Brief post directing people to AI Now Institute's published research
• No specific details about what research or findings are being referenced
Summary:
• AI researcher discusses Anthropic's new security tool called Mythos on TV news
• Warns we lack independent evidence to verify the company's security claims
• Highlights risks of large language models and need for outside verification
Summary:
• AI company Anthropic is building potentially dangerous AI models to protect against them
• This approach assumes the only defense is to create the threat first
• Happening with very little government regulation or oversight
Summary:
• About 12% of people worry excessively about their health
• Using AI chatbots to get health advice can make this anxiety worse
• Shows potential risks of relying on AI for medical guidance
Summary:
• 70-year-old disabled man was denied parole hearing because computer algorithm labeled him risky
• Calvin Alexander is nearly blind and uses a wheelchair, making him unlikely to commit crimes
• Shows how automated systems can make unfair decisions about people's freedom
Summary:
• Civil rights group wants government standards to include anti-discrimination protections
• Pushing for testing to catch bias in areas like housing, jobs, and lending
• Aims to prevent automated systems from unfairly treating different groups of people
Summary:
• Civil rights groups responded to government draft rules about testing AI language models
• Organizations are trying to influence how the government evaluates AI systems
• Shows ongoing effort to shape AI regulation before it becomes final
Summary:
• New survey shows Americans dislike AI, including younger generations
• Young people are struggling to find stable jobs in current workplace
• Negative feelings about AI extend beyond older generations, contrary to common assumptions
Summary:
• Pentagon is rushing to adopt AI technology without proper safety rules in place
• This creates serious risks that could even be deadly in military applications
• Congress needs to create guidelines and oversight before AI is widely used in defense
Summary:
• Expert says transparency should be the foundation of any national AI rules
• Transparency shouldn't be seen as an extra burden on AI companies
• Clear information about AI systems is necessary for the technology to succeed
Summary:
• Major AI companies have started creating their own safety rules and sharing some information
• New bill would create official federal standards for AI safety
• Would provide clearer guidance and make safety processes consistent across companies
Summary:
• Only 22% of high school students got guidance on their school's AI policy
• But 86% of students used AI tools during the school year
• Shows big gap between AI use and proper guidance in schools
Summary:
• Pentagon rushing to use AI in military without proper safety rules
• Could lead to serious or deadly consequences
• Report calls for Congress to create safety guidelines
Summary:
• OpenAI and other companies are testing ads in their AI chat systems
• AI could become better at persuading people to buy things, raising concerns about manipulation
• The way these companies make money could affect how trustworthy their AI responses are
Summary:
• Ads are coming to AI chat systems, following the same pattern as social media platforms
• This isn't surprising but serves as a warning about where AI is heading
• Companies face strong financial pressure to add ads, making it hard to resist
Summary:
• Microsoft's AI CEO wrote about the complicated reality of building AI systems
• The field is moving very fast with many complex challenges
• Gives insight into what's really happening behind the scenes in AI development
Summary:
• Virtual event on April 17th will introduce research program focused on refugees, migrants and AI
• Program centers on actual experiences of refugees and migrants rather than outside perspectives
• Aims to give refugees and migrants more control over how AI affects their lives
Summary:
• Pentagon is rushing to use AI without proper safety rules
• Could lead to serious mistakes or even deaths
• Congress needs to create guidelines for military AI use
Summary:
• AI researchers are hosting their first live comedy show about AI hype and misleading claims
• The event is in Brooklyn on April 30 with limited tickets available
• All money raised will go to support an AI research institute
Summary:
• AI makes it much easier for companies to alter and reuse fashion model images
• Research shows how this affects the modeling industry
• Raises questions about consent and fair compensation for models
Summary:
• Pamela Anderson says we need to keep questioning AI's influence
• Warns against being controlled or fooled by AI technology
• Research shows AI is already changing how modeling work happens
Summary:
• Military AI system called Maven has only 30% accuracy in targeting
• This low accuracy rate is close to random targeting
• Expert discusses what safety measures could be put in place
Summary:
• Someone explaining that underwater data centers in the 2010s weren't built for cooling computers
• Says they were built because it was a fun idea, like underwater restaurants
• Appears to be correcting a common misconception about why these were created
Summary:
• A coalition is helping government, nonprofits, and companies share knowledge about AI
• The goal is to help government use AI responsibly when providing public services
• Focuses on making sure AI adoption in government is trustworthy and safe
Summary:
• New proposed rules about how government buys and uses AI are concerning
• The rules could let government officials skip important safety protections
• Could lead to expanded mass surveillance of citizens
Summary:
• AI tools are making it easier for businesses to find suppliers and source products
• The time from having a product idea to actually launching it is getting much shorter
• Business owners and experts confirm this trend is happening across e-commerce
Summary:
• AI company Anthropic created a 'constitution' of rules for their AI system Claude
• Writer argues this reflects bigger problems with how America's actual constitution is working
• Shows how tech companies are making their own governance rules for AI systems
Summary:
• Trump and his team are promising AI will transform America
• ProPublica says this messaging follows the same pattern as past tech promises
• Suggests government has made similar big claims about technology before
Summary:
• Recommends reading 'Empire of AI' book and watching related talk
• Suggests following specific experts for critical analysis of AI industry
• Promotes more thoughtful, questioning approach to AI hype and claims
Summary:
• People need better reasons not to publish bad or fake research
• AI can help researchers make sure their work is high quality
• Humans control AI and can train it to do what we need
Summary:
• Researchers need consequences for publishing fake or bad studies
• AI can help people cheat but also help them do better science
• Success depends on what behaviors we reward and how we review research
Summary:
• When problems become technical challenges, they get solved over time
• AI can check research quality in ways human reviewers can't or won't
• AI extends what human reviewers can do, but incentives still matter most
Summary:
• AI is more likely to help people use statistics correctly than to help them cheat
• Most people don't understand statistics well, so AI guidance would be an improvement
• Suggests AI could actually make research more accurate
Summary:
• New rules about government buying AI technology could be problematic
• These rules might let officials skip important safety protections
• Could lead to expanded government surveillance of citizens
Summary:
• Privacy expert asks who is collecting information about users online
• Questions how that personal data gets used to make decisions about people
• Highlights concerns about data tracking and automated decision-making
Summary:
• AI companies are collecting huge amounts of personal information from users
• People share this data because AI tools need to know them well to be useful
• This creates the same privacy problems we've seen with social media platforms
Summary:
• Online platforms use small user behaviors to predict and categorize people
• AI systems are following the same pattern as older internet companies
• Your clicks and actions become signals that shape what you see online (toy sketch below)
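To make that mechanism concrete, here is a toy sketch of how a handful of click events becomes a feature vector and then a user category; every name here is hypothetical, and production systems use far richer features and models:

```python
# Toy sketch: small behaviors -> feature vector -> crude categorization.
# All event names and the bucketing rule are hypothetical.
from collections import Counter

def featurize(events: list[str]) -> dict[str, float]:
    """Normalize raw event counts into a simple feature vector."""
    counts = Counter(events)
    total = sum(counts.values()) or 1
    return {event: n / total for event, n in counts.items()}

# Five interactions are already enough to place a user in a bucket.
features = featurize(["click:sports", "dwell:sports",
                      "click:loans", "click:loans", "click:loans"])
segment = max(features, key=features.get)
print(features, "->", segment)   # dominant signal: click:loans
```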
Summary:
• Even simple AI predictions raise important questions about how the results get used
• Making predictions is different from deciding what to do with those predictions
• Companies need better rules for how they use AI-generated insights about people
Summary:
• Companies collect your data for one purpose but then use it for other things
• Users don't expect or agree to these additional uses of their information
• Highlights how personal data can spread beyond its original intended use
Summary:
• AI governance expert says the bigger issue isn't data collection but data use
• Once companies have your information, what they do with it matters most
• Points to need for better oversight of how AI companies handle personal data
Summary:
• Government AI rules require systems to always produce answers even when risky
• Rules also demand vendors follow vague 'unbiased AI principles'
• These requirements are too broad and could make AI tools less safe to use
Summary:
• Privacy groups say proposed government AI contracts have serious problems
• The draft rules could make it harder for agencies to use AI responsibly
• Organizations are pushing back on how the government plans to buy AI services
Summary:
• Major privacy organizations submitted comments on government AI contract rules
• Groups include CDT, EFF, Protect Democracy, and EPIC
• They're responding to proposed terms for how the government buys AI services
Summary:
• Major League Baseball introduced automated system for challenging ball and strike calls
• Early disputes between humans and AI system mirror broader conflicts in society
• Shows how human-AI disagreements in sports reflect larger tensions about AI decision-making
Summary:
• Former government efficiency office worker built an AI tool that made major errors
• The tool incorrectly calculated contract values, sometimes off by millions of dollars
• Shows how AI can produce completely wrong information when analyzing government spending
Summary:
• Government's new AI contract terms could make AI systems less safe
• Could lead to worse outcomes and fewer safety protections
• Companies might be afraid to speak up when AI is used unsafely
Summary:
• Government's updated rules for buying AI systems provide more clarity
• But new terms could weaken important safety measures
• Experts worry this could make AI systems more dangerous
Summary:
• New government guidelines help companies work with government on AI
• Focuses on making AI systems fair and unbiased
• Specifically targets large language models like ChatGPT
Summary:
• Researcher calls AI companions 'diet girlfriends' - lacking real social value
• AI chatbots provide temporary comfort but aren't truly fulfilling
• Study shows people use AI for relationships but it's not the same as human connection
Summary:
• AI writing tools claim to keep humans involved but the process is boring and repetitive
• People end up letting the AI do everything without checking it properly
• This creates dangerous situations when humans should be reviewing AI work
Summary:
• AI development causes worry but also has positive potential
• Humans should be seen as partners with AI rather than just data sources
• Promotes movement to free the future through human-AI collaboration
Summary:
• Artist and technologist from Africa avoids using general-purpose AI tools
• Says African countries usually get worse technology access than others
• Believes simpler machine learning tools designed for limited resources are more useful
Summary:
• AI facial recognition technology wrongly identified a woman as connected to bank fraud
• She was arrested based on this false identification
• Privacy advocates are speaking out about the dangers of using this technology for law enforcement
The Distributed AI Research (DAIR) Institute @dairinstitute.bsky.social Apr 01, 2026
Summary:
• New podcast episode about AI data centers and nuclear power
• Discusses infrastructure building and community resistance in Pennsylvania
• Features multiple experts talking about AI hype and real-world impacts
Summary:
• AI Now Institute offers toolkit for data center policy
• Provides training series for people working on these issues
• Links to website and registration for upcoming sessions
Summary:
• Toolkit created with community groups and policymakers
• Provides practical roadmap for transformative policy changes
• Covers wide range of issues from zoning to water and energy use
Summary:
• Hundreds of communities being asked to accept AI data centers
• These facilities harm natural resources, budgets, health, and future prospects
• Current regulations don't go far enough to address the problems
Summary:
• Training series on data center policy starts April 22nd
• Features expert analysis and real-world strategies from organizers
• Co-hosted with Data Center Working Group for local and state activists
Summary:
• AI Now Institute launches toolkit to fight rapid AI data center expansion
• Provides policies to stop, slow, and restrict these facilities
• Includes training series and covers local, state, and federal policy options
Summary:
• People in Nigeria and India are attaching iPhones to their heads to record daily tasks
• Likely being used to train AI systems on how humans do household work
• Shows how AI training data is being collected from people around the world
Summary:
• Health chatbots could help people who can't easily access doctors
• No independent testing exists to prove these AI tools are safe or effective
• Unclear whether potential benefits outweigh the risks of wrong medical advice
Summary:
• Current AI testing methods don't capture real-world impact
• Suggests moving to testing that focuses more on human needs and specific situations
• Could change how we evaluate whether AI systems actually work well
Summary:
• Professor teaching students that data science jobs involve much more than just running code
• Students submitted answers to a simple data problem to demonstrate real-world complexity
• Points out that many businesses operate with similar confusion about their data
Summary:
• CDT hosting a networking event focused on privacy and AI topics
• Event brings together leaders in these fields for discussion
• Tickets available at the door for those who didn't buy in advance
Summary:
• Military adopting an 'AI-first' approach to warfare
• Dispute between Pentagon and AI company Anthropic raises concerns
• Questions whether military AI use is effective, safe, and legal
Summary:
• AI in policing needs proper oversight and transparency rules
• Police officers' judgment cannot be replaced by automated systems
• Civil rights protections should come before convenience of AI tools
Summary:
• AI tools now help police write reports automatically to save time
• These AI-generated reports can influence arrests, charges, and prison sentences
• Errors or bias in the AI could seriously impact someone's freedom
Summary:
• Federal judge stopped government punishment of an AI company
• Company apparently posted on social media before consulting lawyers
• Shows tension between AI companies and government regulation
Summary:
• Special AI chatbots designed for healthcare could help people with limited access to doctors
• Not enough testing has been done to know if they actually help or cause harm
• Could be important for underserved communities but safety concerns remain
Summary:
• Tech companies are suddenly focusing heavily on building robots
• This could eliminate many manufacturing jobs that still exist in the US
• Manufacturing work has already declined over 50 years as jobs moved overseas