Science
By 2035, science has been transformed by specialized Tool AIs designed to stay within liability safe harbors. These systems achieve high intelligence in narrow domains while avoiding the autonomy and generality that trigger strict liability. Diagnostic AIs require human validation, protein-folding predictors generate candidates that must be experimentally verified, and hypothesis generators propose theories for human evaluation. All operate with transparency, human oversight, and clear constraints.
Progress comes from coordination among specialized tools, not individual generality. A cancer researcher might combine a diagnostic AI, a literature synthesis tool, and a clinical trial designer, each narrow and interpretable, with humans directing integration and validation.
What’s in use 2035
- Knowledge synthesis & discovery: AI-assisted literature review tools identify relevant publications, map research directions, and surface underexplored areas. Cross-Domain Hypothesis Engines scan for structural analogies across fields (for example, adapting a galaxy formation model to study tumor metastasis) to generate high-risk, high-reward research directions.
- Simulation & design: Advanced in silico simulation platforms model biological, chemical, and physical systems before lab testing.[1] Automated experiment design tools propose full experimental protocols, including control groups, statistical power calculations, potential failure modes, and recommended lab equipment.
- Verification & reproducibility: Reproducibility infrastructure supports peer review through AI-aided replication checks, figure regeneration, and methodology validation.[2] Epistemic stack infrastructure links high-level claims to raw data and training sources, enabling traceable, auditable research (see the provenance sketch after this list).
- AI-native instruments: Laboratory equipment, from gene sequencers to electron microscopes, integrates Tool AI for self-calibration, optimal setting suggestions, and transparent data pre-processing, turning instruments into active research partners.
- Model reconciliation: “Consilience-as-a-Service” systems identify and resolve inconsistencies between scientific models, enabling paradigm shifts and unifying theories.
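To make the "epistemic stack" in the verification bullet above concrete, here is a minimal sketch of how such a provenance record might be structured, assuming a simple directed graph from claims back to raw data. The `EvidenceNode` type and its field names are illustrative inventions, not an established schema.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal provenance record for an "epistemic stack":
# each high-level claim points at the datasets, models, and upstream
# claims it depends on, so a reviewer can walk the chain back to raw data.

@dataclass
class EvidenceNode:
    node_id: str      # e.g., a DOI, dataset accession, or model hash
    kind: str         # "raw_data" | "model" | "analysis" | "claim"
    parents: list[str] = field(default_factory=list)  # upstream node_ids

def trace_to_raw_data(nodes: dict[str, EvidenceNode], claim_id: str) -> set[str]:
    """Return every raw-data node reachable from a claim (depth-first walk)."""
    seen: set[str] = set()
    stack, raw = [claim_id], set()
    while stack:
        nid = stack.pop()
        if nid in seen:
            continue
        seen.add(nid)
        node = nodes[nid]
        if node.kind == "raw_data":
            raw.add(nid)
        stack.extend(node.parents)
    return raw
```

Under this kind of scheme, an auditor can check mechanically that every published claim resolves to at least one deposited dataset.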
What made this possible
- Advances in causal and world-model architectures: Natural science LLMs and graph-based model architectures made scientific reasoning more machine-parsable. The shift from purely correlational models to those with deeper causal reasoning and rudimentary “world models” of physics and biology allowed predictions to be both more robust and physically grounded.
- Improved laboratory automation: Robotic lab systems reduced manual work and enabled much faster iteration cycles in experimental design. While they did not solve the fundamental cost gap between generating and validating hypotheses, they made large-scale testing far more feasible than in the 2020s.
- Standardized reproducibility mandates: Funders and journals began requiring deposition of AI models and their machine-readable epistemic metadata as a condition of publication. This created an interoperable, auditable global research ecosystem, making it possible to verify results and trace ideas back to source data.
- A new IP regime for AI-assisted discovery: Legal and economic frameworks adapted to handle attribution and intellectual property when discoveries were co-generated by humans and AI tools. This shifted incentives toward open collaboration and away from proprietary hoarding, especially for foundational scientific results.
Persistent frictions
- Epistemic convergence risk: Widespread reliance on similar AI models narrowed the exploration space, as researchers were nudged toward already well-mapped lines of inquiry rather than more speculative or unconventional ideas.
- Speed–validation mismatch: Tool AI produced hypotheses far faster than physical labs could test them, creating a “theory glut.” Smaller institutions, lacking access to large-scale automated labs, were left behind, and some academic communities resisted what they saw as an erosion of traditional scholarly craft.
- Validation bottlenecks: Even with robotics, physical validation remained the slowest step in science, and many promising AI-generated leads languished untested for years.
- Strategic opacity and rogue labs: Rumors persisted of state-backed or corporate “moonshot” labs flouting transparency norms, developing semi-autonomous “AI researchers” to pursue breakthroughs competitively and refusing to integrate their claims into the public epistemic stack.
- The fading of human intuition: A generational skills gap emerged between “classical” scientists and those adept at AI interrogation, that is, at critically evaluating and challenging counterintuitive outputs. Over-reliance risked automation bias at a civilizational scale.
- Revolution vs. optimization: A deepening debate questioned whether Tool AI’s extraordinary efficiency at refining existing paradigms was suppressing the kind of “unreasonable” leaps (serendipitous, paradigm-shifting insights) that historically drove the biggest scientific revolutions.
Healthcare
By 2035, Tool AI is embedded across clinical workflows, supporting diagnostics, treatment planning, longitudinal risk analysis, and patient communication, always under human oversight. These systems extend clinician capacity by providing structured, transparent suggestions based on large-scale medical data and patient-specific information.[3][4]
They help interpret complex test results, flag unusual patterns, and generate comparative treatment options. In lower-resource settings, they expand access to quality care by standardizing reasoning and surfacing overlooked conditions. Digital twins simulate treatment responses before implementation, showing how medications might interact with a patient’s genetic profile, organ function, and existing conditions. Integrated immune monitoring continuously tracks biological markers to predict disease progression, treatment efficacy, and adverse reactions. All systems provide uncertainty estimates, references, and step-by-step reasoning so recommendations can be challenged and overridden.
What’s in use 2035
- Diagnostic copilots that generate ranked differential diagnoses from both structured (labs, imaging) and unstructured (clinical notes) data, flagging anomalies and missed conditions, with uncertainty estimates attached (a toy ranking sketch follows this list).
- Digital twins of patient physiology that simulate treatment responses before clinical application, modeling drug–gene interactions, organ function changes, and comorbidity effects.
- Integrated immune monitoring systems that continuously track biomarkers, such as inflammatory signals, immune cell counts, and antibody levels, to anticipate disease progression and therapy effectiveness.
- Longitudinal risk analysis tools that compile patient history over years, spotting subtle patterns and critical changes before they become clinically apparent.
- AI-generated treatment explanations formatted for both clinicians and patients, detailing rationale, alternatives, expected benefits, and trade-offs in plain language.
- Administrative automation systems that reduce overhead by auto-filling insurance forms, verifying coverage, streamlining billing, and processing prior authorizations in minutes.
- Accelerated drug development pipelines that use AI to design more efficient clinical trials, predict potential interactions or failures earlier, and cut approval timelines from years to months.
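As a gesture at how a diagnostic copilot might attach uncertainty to a ranked differential, here is a deliberately toy naive-Bayes ranker. Production systems use far richer models; every prior, likelihood, and condition name below is an invented placeholder, not clinical data.

```python
import math

# Invented toy numbers for illustration only.
PRIORS = {"condition_a": 0.05, "condition_b": 0.02, "condition_c": 0.001}
LIKELIHOODS = {  # P(finding | condition) for a few binary findings
    "condition_a": {"fever": 0.8, "fatigue": 0.6},
    "condition_b": {"fever": 0.3, "fatigue": 0.9},
    "condition_c": {"fever": 0.9, "fatigue": 0.2},
}

def rank_differential(findings: list[str]) -> list[tuple[str, float]]:
    """Rank conditions by posterior probability given observed findings."""
    log_scores = {}
    for cond, prior in PRIORS.items():
        log_p = math.log(prior)
        for f in findings:
            log_p += math.log(LIKELIHOODS[cond].get(f, 0.01))  # crude smoothing
        log_scores[cond] = log_p
    total = sum(math.exp(s) for s in log_scores.values())  # normalize
    ranked = sorted(log_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(cond, math.exp(s) / total) for cond, s in ranked]

# rank_differential(["fever", "fatigue"]) returns each condition with a
# normalized probability, which doubles as a simple uncertainty estimate.
```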
What made this possible
- Advances in medical natural language processing and multimodal AI that can integrate lab results, imaging, and narrative records into coherent patient models previously requiring teams of specialists.
- Regulatory requirements for contestability, audit trails, and local validation to ensure AI recommendations remain transparent, overridable, and legally accountable.[5]
- Public health agency pilots in underserved regions that demonstrated gains in early detection, treatment throughput, and patient outcomes, providing political and clinical proof of value.
- Institutional adoption of human-in-the-loop protocols in high-stakes settings, combining AI’s accuracy in pattern recognition with human oversight for ethical, patient-centered care.
- Growth of open medical datasets and collaborative model development, giving smaller healthcare systems access to advanced tools while safeguarding privacy through federated learning.[6]
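The federated learning mentioned above reduces to a small core: sites train locally and share only model weights, which a coordinator averages. Below is a minimal FedAvg-style sketch, assuming fixed-length weight vectors; real deployments add secure aggregation, differential privacy, and many training rounds.

```python
# Minimal federated-averaging sketch (illustrative assumptions throughout).

def federated_average(site_weights: list[list[float]],
                      site_sizes: list[int]) -> list[float]:
    """Aggregate locally trained weights, weighted by local dataset size,
    so raw patient records never leave each participating hospital."""
    total = sum(site_sizes)
    dim = len(site_weights[0])
    avg = [0.0] * dim
    for weights, n in zip(site_weights, site_sizes):
        for i in range(dim):
            avg[i] += weights[i] * (n / total)
    return avg

# Each round: sites train locally, then share only their weight vectors:
# global_w = federated_average([w_site_a, w_site_b], [1200, 800])
```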
Persistent frictions
- Healthcare infrastructure modernization gaps: Many hospitals, especially public ones, still rely on outdated electronic health records and lack the technical staff to integrate AI effectively.
- Automation bias and clinical judgment: Some clinicians over-trust AI outputs in high-pressure environments; others dismiss them outright, even when evidence-based.
- Data representation inequities: Minority and low-resource populations remain underrepresented in datasets, risking uneven system performance and perpetuating disparities.
- Validation maintenance burden: Keeping models up to date with new treatment guidelines, emerging health risks, and evolving best practices is resource-intensive and ongoing.
Education
By 2035, Tool AI is woven into education systems to personalize learning, support teachers, and improve outcomes across a wide range of contexts. These systems adapt content, pacing, and feedback to individual learners, while giving teachers real-time insight into class-wide progress and student-specific needs.[7][8][9]
In classrooms, Tool AI identifies learning gaps, recommends targeted interventions, and produces differentiated materials based on performance and engagement patterns. Outside school, AI tutors provide multi-language, multi-format support, making high-quality assistance more widely accessible. Adoption accelerated after strong evidence from low-resource contexts showed consistent gains in literacy, numeracy, and retention. Standards for explainability and teacher involvement ensure AI augments, not replaces, human instruction.
What’s in use 2035
- Adaptive learning systems that personalize exercises and content based on detailed student-level data, adjusting difficulty, pacing, and instructional approach to individual progress and learning styles (see the mastery-tracking sketch after this list).
- Teacher dashboards that flag students needing attention, recommend intervention strategies, and provide live analytics on both individual and class performance.
- AI-assisted formative assessment tools that evaluate understanding in real time, prompting teachers to adjust explanations or lesson flow based on comprehension gaps and learning momentum.
- AI-generated learning materials tailored to different reading levels, learning preferences, or linguistic backgrounds, maintaining curriculum alignment while meeting diverse needs.
- Curriculum-aligned content generators integrated into school platforms, tracking educational goals and ensuring that all AI-created materials reinforce official learning objectives.
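One way adaptive systems like those above decide when to raise or lower difficulty is Bayesian Knowledge Tracing (BKT), a classic model from the intelligent-tutoring literature. The sketch below shows the standard BKT posterior update; the parameter values and difficulty tiers are illustrative assumptions.

```python
# Toy Bayesian Knowledge Tracing update. Parameters are invented examples.
P_LEARN = 0.15   # chance the skill is learned on each practice attempt
P_GUESS = 0.20   # chance of a correct answer without the skill
P_SLIP = 0.10    # chance of a wrong answer despite having the skill

def update_mastery(p_mastery: float, correct: bool) -> float:
    """Update P(skill mastered) after one graded answer, then apply learning."""
    if correct:
        evidence = p_mastery * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_mastery) * P_GUESS)
    else:
        evidence = p_mastery * P_SLIP
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - P_GUESS))
    return posterior + (1 - posterior) * P_LEARN

def next_difficulty(p_mastery: float) -> str:
    """Map estimated mastery onto a difficulty tier for the next exercise."""
    if p_mastery < 0.4:
        return "review"
    return "practice" if p_mastery < 0.85 else "challenge"
```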
What made this possible
- Advances in educational NLP and reinforcement learning from human feedback (RLHF), enabling adaptive tutoring systems to respond dynamically to individual needs.
- Pilot programs in underserved regions that demonstrated significant improvements in reading, math, and retention, building the evidence base for broader rollout.
- Support from teachers’ unions and education ministries for clear standards ensuring AI systems remain under professional oversight.
- Public funding and philanthropic initiatives that expanded access to open-source educational AI, bridging the affordability gap for schools unable to buy proprietary systems.
- Teacher training programs that built fluency in AI-assisted teaching and set norms for human-in-the-loop integration, keeping educators central to the learning process.
Persistent frictions
- Infrastructure and training gaps: Uneven device access, internet connectivity, and educator training limit consistent uptake, especially in under-resourced areas.
- Privacy and surveillance concerns: Parents, students, and educators worry about data collection, long-term storage, and potential misuse of educational records.
- Over-reliance and shallow engagement: Some learners and teachers depend too heavily on AI recommendations, leading to rote responses and reduced critical thinking.
- Algorithmic bias in assessments: Persistent risk that biased data or design could produce unfair evaluations, especially for marginalized student populations.
Climate/Energy
By 2035, Tool AI is central to climate and energy management: modeling, forecasting, and optimizing systems to support decarbonization and resilience. These tools provide transparent, auditable recommendations to researchers, policymakers, and infrastructure operators, enhancing both day-to-day operations and long-term planning.
In climate science, Tool AI simulates the long-term impacts of policy interventions, identifies region-specific risk patterns, and evaluates mitigation or adaptation strategies. In energy, it integrates renewable sources into power grids, optimizes storage solutions, and coordinates large-scale decarbonization efforts.[10][11][12][13][14]
What’s in use 2035
- AI-enhanced climate modeling & weather forecasting: Hyperlocal weather predictions accurate enough for farmers to plan harvests and cities to prepare for extreme events. Narrow in scope, accountable through human meteorologists, and verifiable within days. For long-term climate projections, AI emulators rapidly generate scenarios using less compute than traditional models, but political sensitivity over high-stakes forecasts keeps interpretability under scrutiny.
- Smart grid management systems: Balance supply and demand in real time, integrate variable renewables, predict localized generation surpluses (e.g., when rooftop solar will produce excess), and route power efficiently, even to opportunistic uses like EV charging (a simplified dispatch sketch follows this list).
- Materials discovery platforms: Accelerate the development of next-generation energy storage, carbon capture materials, and fusion reactor components, with the last of these finally breaking the decades-long “20 years away” barrier for commercial fusion.
- Public engagement interfaces: Translate complex climate and energy data into accessible, interactive visualizations, enabling communities to understand local risks and options without requiring technical expertise.
- Fusion energy coordination systems: Manage multi-plant fusion networks, optimizing plasma stability, scheduling maintenance, and coordinating fuel supply across facilities built in the early 2030s.
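To make the grid-balancing bullet concrete, here is a deliberately simplified merit-order dispatch loop: meet forecast demand from the cheapest sources first and divert any zero-cost renewable surplus to flexible loads such as EV charging. Source names, capacities, and costs are invented, and real grid optimizers solve network-constrained problems far beyond this greedy sketch.

```python
SOURCES = [  # (name, capacity_mw, marginal_cost_per_mwh) -- invented values
    ("solar", 400.0, 0.0),
    ("wind", 300.0, 0.0),
    ("hydro", 200.0, 15.0),
    ("gas_peaker", 500.0, 80.0),
]

def dispatch(demand_mw: float) -> dict[str, float]:
    """Greedy merit-order dispatch; zero-cost surplus goes to EV charging."""
    plan: dict[str, float] = {}
    remaining = demand_mw
    for name, cap, _cost in sorted(SOURCES, key=lambda s: s[2]):
        used = min(cap, max(remaining, 0.0))
        plan[name] = used
        remaining -= used
    # Any unused zero-cost (renewable) capacity becomes opportunistic load.
    zero_cost_cap = sum(cap for _n, cap, c in SOURCES if c == 0.0)
    zero_cost_used = sum(plan[n] for n, _cap, c in SOURCES if c == 0.0)
    plan["ev_charging"] = max(zero_cost_cap - zero_cost_used, 0.0)
    return plan

# e.g., dispatch(600.0) serves demand entirely from solar + wind (700 MW
# available) and routes the remaining 100 MW of surplus to EV charging.
```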
What made this possible
- Breakthroughs in physics-informed AI: Neural network architectures incorporating physical laws into training improved accuracy while cutting computational requirements for climate and energy modeling (see the loss-function sketch after this list).
- Open climate data revolution: Global open-source datasets, combining satellite, ground sensor, and historical records, provided a shared foundation no single organization could have assembled alone.
- Effective policy frameworks: Carbon pricing and renewable standards created market pull for AI optimization. Regulatory sandboxes allowed safe experimentation, and the 2028 International Climate AI Accord standardized data-sharing protocols in over 40 countries.
- Fusion research coordination: The Global Fusion AI Consortium pooled compute and data from national labs, enabling joint optimization of reactor designs and shifting fusion R&D from siloed national programs to a collaborative global effort.
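The physics-informed idea in the first bullet can be summarized in a single loss function: penalize the model both for mismatching observations and for violating a known governing equation. The sketch below assumes `model` and `physics_residual` are caller-supplied functions; real physics-informed neural networks compute residuals with automatic differentiation rather than the generic hooks shown here.

```python
def physics_informed_loss(model, inputs, observations, physics_residual,
                          weight: float = 1.0) -> float:
    """Data misfit plus a penalty for violating the governing equation."""
    n = len(inputs)
    data_term = sum((model(x) - y) ** 2 for x, y in zip(inputs, observations)) / n
    physics_term = sum(physics_residual(model, x) ** 2 for x in inputs) / n
    return data_term + weight * physics_term

# Example residual: enforce the decay law dy/dx = -y via finite differences.
def decay_residual(model, x, h=1e-4):
    return (model(x + h) - model(x)) / h + model(x)
```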
Persistent frictions
- Data infrastructure disparities: Large regions, particularly in Sub-Saharan Africa and rural Asia, still lack the sensors and monitoring needed for accurate local climate modeling, limiting the benefits of AI-driven planning where they’re most needed.
- Physical infrastructure limitations: Many areas have advanced forecasting and optimization capabilities but cannot implement them due to aging grids, inadequate storage, or slow infrastructure upgrades.
- Accountability and liability challenges: Determining responsibility when AI-assisted recommendations lead to unintended consequences, such as blackouts after a coal plant shutdown, remains legally and politically complex.
- Fusion deployment constraints: While AI solved major plasma physics challenges, the slow pace of plant construction, constrained by capital costs, skilled labor shortages, and regulatory delays, limits scaling to only a few dozen facilities worldwide.
Governance
By 2035, Tool AI supports governments, institutions, and communities in designing policies, simulating tradeoffs, coordinating across stakeholders, and improving transparency. These systems process complexity, highlight risks, surface shared values, and present options in ways diverse stakeholders can understand and debate.
They have enabled the rise of adaptive institutions, governance systems that evolve based on structured feedback, shifting priorities, and local context. Tool AIs monitor compliance, evaluate impacts, and recommend course corrections. In low-trust or high-stakes contexts, they help facilitate negotiation and consensus-building while keeping outputs traceable and contestable. They also improve incentive alignment by modeling stakeholder responses and identifying perverse incentives before policies are enacted.
Rather than aiming for “perfect” decisions, these tools expand decision-making bandwidth, increase institutional legibility, and improve alignment between actors, especially where governance has historically been slow, reactive, or opaque.[15][16][17]
What’s in use 2035
- “Habermas machines” for scalable deliberation: Synthesize millions of citizen inputs into coherent policy options, identify hidden consensus points, and structure debates so participants engage productively. Used for democratic discourse at population scale rather than simple polling (a toy consensus-finding sketch follows this list).
- AI-supported negotiation tools: Applied in land-use disputes, treaty negotiations, and multi-stakeholder agreements; model competing interests and suggest creative, mutually acceptable compromises human negotiators might miss.
- Policy simulators for second-order effects: Model how decisions ripple across sectors and time, e.g., how housing policy affects transportation patterns or education reforms impact economic mobility.
- Adaptive policy dashboards: Real-time monitoring to track conditions, measure policy impacts, and auto-adjust programs within democratically set parameters.
- Treaty verification systems: Monitor and update compliance mechanisms without compromising sovereignty, offering transparent assessments of commitments while respecting national autonomy.
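As a minimal illustration of the consensus-finding step inside a deliberation platform (in the spirit of Pol.is-style preference mapping), the sketch below surfaces statements that clear a support threshold in every opinion group. Group assignment, which in practice comes from clustering participants’ vote vectors, is assumed as given input here.

```python
def consensus_statements(votes: dict[str, dict[str, int]],
                         groups: dict[str, str],
                         threshold: float = 0.6) -> list[str]:
    """Return statements a majority of EVERY opinion group endorses.
    votes[statement][participant] is +1 (agree) or -1 (disagree);
    groups[participant] is that participant's opinion-group label."""
    consensus = []
    for statement, ballots in votes.items():
        support: dict[str, tuple[int, int]] = {}  # group -> (agrees, total)
        for person, vote in ballots.items():
            g = groups[person]
            agrees, total = support.get(g, (0, 0))
            support[g] = (agrees + (vote > 0), total + 1)
        if support and all(a / t >= threshold for a, t in support.values()):
            consensus.append(statement)
    return consensus
```

The cross-group requirement is the point: a statement popular with one faction but rejected by another is debate material, not consensus.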
What made this possible
- Research in Cooperative AI that produced early models for stable negotiation and consensus-building in adversarial or complex settings, forming the foundation for today’s multi-party agreement tools.
- Shared data formats and simulation environments enabling multi-party coordination without centralized control, allowing jurisdictions to test policy interactions while maintaining autonomy.
- Investment in civic infrastructure to develop public-facing tools that maintain transparency and human oversight, preventing AI-assisted governance from becoming a technocratic black box.
- Civic epistemics platforms to track model strengths, weaknesses, and contested areas, creating feedback loops that improve both performance and public trust.
Persistent frictions
- Legitimacy beyond efficiency: Some governments resist delegating decision-support to AI, especially in politically sensitive domains where citizens demand not only optimal outcomes but meaningful participation.
- Strategic gaming: Sophisticated actors manipulate transparent processes, e.g., feeding biased data into public consultations or gaming preference-mapping algorithms to skew outcomes.
- Standardization vs. local adaptation: Over-standardized frameworks can limit local innovation, pressuring communities to adopt “best practice” tools over context-specific solutions.
- Accountability maintenance burden: Sustaining public trust requires constant auditability, cultural sensitivity, and accessible redress mechanisms, all of which demand ongoing investment in human oversight and explanation systems.
Law and Justice
By 2035, Tool AI is integrated across the legal system to assist with research, analysis, and transparency, but not judgment. These systems help lawyers, judges, and legal aid providers identify precedent, analyze statutes, and surface fairness concerns. Outputs are explainable, overrideable, and designed to support legal professionals in navigating complex cases more efficiently and equitably.
Tool AIs are widely used in document analysis, legislative drafting, and legal translation, especially in multilingual or under-resourced jurisdictions. In court, they provide structured summaries and flag procedural risks. In public defenders’ offices, they ease chronic caseload bottlenecks by accelerating fact-checking and document preparation. While they have not replaced human discretion, they have expanded capacity and improved access to fair representation.[18][19][20][21]
What’s in use 2035
- AI legislative drafting systems: Handle the technical formulation of legislation, enabling lawmakers to focus on policy objectives and representation. These systems check for internal consistency, avoiding contradictory or duplicative laws, and maintain a coherent, unified body of legislation.
- AI arbitration as standard practice: Most contracts now contain AI arbitration clauses. AI arbitrators resolve routine commercial disputes quickly, reducing court workloads and leaving human judges to focus on complex constitutional or criminal matters.
- Statutory analysis tools: Continuously scan legal codebases to flag outdated, ambiguous, or conflicting language that could lead to inconsistent enforcement or legal disputes (a minimal conflict-detection sketch follows this list).
- Public defender AI assistance: Advanced copilots provide legal research, case preparation, and procedural guidance to under-resourced defenders, helping ensure fairer representation across jurisdictions.
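One of the simpler checks a statutory analysis tool can run is flagging terms that different statutes define differently. The sketch below assumes statutes are already parsed into term-definition maps and uses exact string comparison; real tools work over messier text and semantic similarity.

```python
def conflicting_definitions(
        statutes: dict[str, dict[str, str]]) -> dict[str, dict[str, str]]:
    """statutes[statute_id][term] = definition text. Return each term whose
    definitions differ across statutes, mapped to the clashing sources."""
    seen: dict[str, dict[str, str]] = {}   # term -> {definition: statute_id}
    conflicts: dict[str, dict[str, str]] = {}
    for statute_id, definitions in statutes.items():
        for term, definition in definitions.items():
            variants = seen.setdefault(term, {})
            variants.setdefault(definition, statute_id)  # first source wins
            if len(variants) > 1:
                conflicts[term] = dict(variants)
    return conflicts

# e.g., if "vehicle" is defined one way in a traffic code and another way in
# a tax code, the term is returned with both definitions and their sources.
```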
What made this possible
- Law-following AI requirements became standard in procurement frameworks, ensuring systems complied with constitutional principles, procedural safeguards, and core legal norms before deployment.
- Transparency and contestability standards guaranteed that AI outputs could be audited and challenged, with mandatory explanation capabilities for lawyers and judges.
- Institutional procurement frameworks prioritized tools that preserved due process, detected bias, and reinforced human responsibility in decision-making.
- Careful role design maintained public trust: systems were deployed as assistants, not autonomous adjudicators, with clear boundaries around authority and mandatory human oversight in consequential decisions.
- Open datasets and domain-specific validation programs established early best practices for explainability, fairness, and accountability in legal AI.
Persistent frictions
- Infrastructure and workflow modernization gaps: Some legal systems still rely on outdated digital infrastructure, slowing adoption and producing uneven performance.
- Automation bias and erosion of critical thinking: Over-reliance on AI-generated summaries can discourage practitioners from questioning legal frameworks or seeking alternative interpretations.
- Perpetuation of historical biases: Models trained on historical legal data risk embedding discriminatory patterns if not rigorously audited and corrected.
- Trust and legitimacy challenges: In communities with histories of legal overreach or mistrust, AI-assisted processes face resistance, regardless of their transparency or demonstrated fairness.
Economy
By 2035, AI-driven automation and robotics have transformed production across manufacturing, agriculture, construction, services, and research. Productivity gains are extraordinary, but so are the risks of extreme wealth concentration. Learning from early warnings, many societies adopted a dual-track approach:
- Redistribution measures, such as universal basic income (UBI) pilots expanded from the late 2020s.
- Predistribution strategies to broaden ownership of AI and robotics capital before inequality became entrenched.
Tool AI systems help design, monitor, and adapt these policies in real time, enabling governments to respond to different economic trajectories, from steady growth to rapid breakthroughs, while preserving legitimacy and stability.[22][23][24][25]
What’s in use 2035
- Capital dividend funds: National and regional funds holding equity in AI infrastructure, robotics fleets, and automated production facilities, distributing dividends to citizens as universal basic capital (a back-of-envelope payout sketch follows this list).
- Adaptive economic modeling platforms: Tool AI simulations that project the macroeconomic impacts of policy changes, stress-test UBI and capital-distribution schemes, and assess fiscal stability under varying AI adoption scenarios.
- Personal AI–robot teams: Individually or cooperatively owned AI-robot units capable of autonomous production, allowing owners to earn income without direct labor.
- Targeted UBI programs: Baseline income support where predistribution alone cannot ensure stability, often linked to cost-of-living metrics or specific regional needs.
- Antitrust and platform-neutrality frameworks: Regulations to limit excessive concentration of AI infrastructure and ensure open access to essential AI tools and compute resources.
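The arithmetic behind a capital dividend fund is simple enough to show directly. The sketch below computes a per-citizen payout as a fixed share of the fund’s real return; all figures in the example are invented placeholders, not projections.

```python
def annual_dividend(fund_value: float, real_return: float,
                    payout_ratio: float, citizens: int) -> float:
    """Per-citizen dividend: a fixed share of the fund's real annual return."""
    return fund_value * real_return * payout_ratio / citizens

# e.g., a $2T fund earning 5% real, paying out 80% across 50M citizens:
# annual_dividend(2e12, 0.05, 0.8, 50_000_000) -> $1,600 per person per year.
```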
What made this possible
- Cost reductions in robotics and AI: Advances lowered barriers to automation for individuals, communities, and small firms, not just large corporations.
- Policy foresight and scenario planning: Governments tested strategies for both incremental growth and potential AGI-scale disruption.
- Proactive predistribution policies: Drawing on models like sovereign wealth funds and universal capital access programs, ownership schemes were implemented early to prevent entrenched inequality.
- Tool AI policy co-design: AI-assisted governance tools enabled more agile, evidence-based policy adjustments.
Persistent frictions
- Implementation speed mismatches: Some governments moved too slowly on predistribution, allowing early AI capital concentration.
- Robotics access inequality: AI software is widely available, but advanced robotics remain costly and clustered in certain regions, creating new disparities in productive capacity.
- Platform power consolidation: Despite antitrust measures, control over foundation models and compute resources remains concentrated among a few global players.
- Policy coordination gaps: Divergent national approaches (some prioritizing redistribution, others predistribution) create tensions in trade, taxation, and cross-border investment.
References
1. Jumper, J. et al. (2021). Highly Accurate Protein Structure Prediction with AlphaFold. Nature. https://www.nature.com/articles/s41586-021-03819-2
2. Stodden, V. et al. (2018). Enhancing Reproducibility for Computational Methods. Science. https://www.science.org/doi/10.1126/science.aah6168
3. Singhal, K. et al. (2023). Large Language Models Encode Clinical Knowledge. Nature Medicine. https://www.nature.com/articles/s41591-023-02289-7
4. World Health Organization. (2021). Ethics and Governance of AI for Health. https://www.who.int/publications/i/item/9789240029200
5. Regulatory frameworks for clinical AI have increasingly emphasized human oversight, explainability, and post-deployment monitoring; examples include the EU’s AI Act, the FDA’s Good Machine Learning Practice guidelines, and the WHO’s ethical guidance on AI in health. See World Health Organization (2021), Ethics and Governance of AI for Health.
6. Rieke, N. et al. (2020). The Future of Digital Health with Federated Learning. NPJ Digital Medicine.
7. Koedinger, K. R. et al. (2015). Learning Is Not a Spectator Sport. https://journals.sagepub.com/doi/abs/10.3102/0034654314562961
8. Luckin, R. et al. (2016). Intelligence Unleashed: An Argument for AI in Education. Pearson. https://www.pearson.com/uk/news-and-policy/news/2016/06/intelligence-unleashed.html
9. Beg, S. et al. (2021). EdTech Interventions in Developing Countries. Center for Global Development. https://www.cgdev.org/sites/default/files/edtech-interventions-developing-countries.pdf
10. Kravitz Research Group. Climate Model Emulators. https://climatemodeling.earth.indiana.edu/research/climate-model-emulators.html
11. VROC AI. AI Grid Optimization. https://vroc.ai/industries/power-gas-grid/
12. Mitsubishi Heavy Industries. AI for Materials and Carbon Capture. https://www.mhi.com/products/engineering/co2plants_process.html
13. ScienceDirect. AI Interfaces for Public Engagement in Climate Policy. https://www.sciencedirect.com/science/article/abs/pii/S0013935123013348
14. Open Energy Modelling Initiative. Manifesto. https://openmod-initiative.org/manifesto.html
15. Open Policy Simulation Lab. https://policysimulator.eu/
16. Pol.is – Collective Intelligence & Preference Mapping. https://pol.is/home
17. Consensus AI. https://consensus.app/
18. Chalkidis, I. et al. (2021). Legal NLP and Deep Learning. https://arxiv.org/abs/2104.08671
19. Cowgill, B. et al. (2021). Algorithmic Fairness in Practice. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3683951
20. Surden, H. (2020). Artificial Intelligence and Law. https://lawreview.law.ucdavis.edu/issues/53/3/articles/53-3_Surden.pdf
21. Zhong, H. et al. (2020). Legal Judgment Prediction Benchmark. https://aclanthology.org/2020.lrec-1.352/
22. Brynjolfsson, E. et al. (2023). The Productivity J-Curve. https://www.nber.org/papers/w25148
23. Banerjee, A. et al. (2020). Universal Basic Income in Developing Countries. https://www.sciencedirect.com/science/article/abs/pii/S0305750X20300670
24. Batty, M. et al. (2022). Computational Models for Economic Forecasting. https://link.springer.com/article/10.1007/s10614-022-10371-4
25. Standing, G. (2017). Basic Income and How We Can Make It Happen. Penguin. https://www.penguin.co.uk/books/289539/basic-income-and-how-we-can-make-it-happen