AI in Sciences and the New Era of Discovery




Artificial intelligence has crossed a decisive threshold in the sciences. The 2024 Nobel Prizes in both Physics and Chemistry were awarded to AI researchers — Geoffrey Hinton and John Hopfield for neural network foundations, and Demis Hassabis, John Jumper, and David Baker for protein structure prediction and computational protein design — marking the first time AI-related work earned Nobel recognition across two disciplines simultaneously. This was not symbolic; it reflected a genuine transformation already underway across drug discovery, materials science, climate modeling, genomics, and fundamental physics. AI-designed drugs are advancing through clinical trials, AI weather models now outperform the world's best supercomputer-driven forecasts while running in minutes rather than hours, and AI systems have discovered more new stable materials in a single study than humanity catalogued in all of prior history. With over $202 billion in global AI venture funding in 2025 and governments committing tens of billions more, scientific AI has moved from proof-of-concept to production. The implications are extraordinary: research timelines that once stretched across decades are being compressed into months, and autonomous laboratories now run experiments around the clock without human intervention.

Drug Discovery Enters Its AI-Native Era

The pharmaceutical industry's adoption of AI represents perhaps the most commercially advanced frontier of scientific AI. Traditional drug development requires 10–15 years and exceeds $2.5 billion per approved drug, with a roughly 90% failure rate in clinical trials. AI is compressing these timelines dramatically. Insilico Medicine's rentosertib (INS018_055), a first-in-class TNIK inhibitor for idiopathic pulmonary fibrosis, went from AI-identified target to Phase I in under 30 months — roughly half the traditional timeline — and completed Phase IIa in 71 patients with dose-dependent improvement in lung function, publishing results in Nature Medicine in June 2025. It stands as the first drug with both its target and molecule designed entirely by AI to reach mid-stage clinical trials.

The pipeline of AI-discovered therapeutics is expanding rapidly. As of early 2026, over 3,000 AI-assisted drugs are reportedly in development, up from just 3 in 2016 and 67 in clinical stages by 2023. According to a BCG analysis of 21 AI-developed drugs that completed Phase I trials, success rates ranged from 80–90%, compared with roughly 40% for traditionally developed compounds. Specific programs illustrate this breadth: Relay Therapeutics' RLY-2608, a mutant-selective PI3Kα inhibitor designed using AI-driven protein dynamics modeling, demonstrated 9.2 months median progression-free survival in Phase 2 for metastatic breast cancer. Schrödinger's physics-ML platform produced zasocitinib (TAK-279), a TYK2 inhibitor now in Phase III for plaque psoriasis through its Takeda partnership. Recursion Pharmaceuticals, after acquiring Exscientia in November 2024 for $688 million — the largest AI drug discovery merger to date — runs six active clinical programs spanning oncology and rare disease, powered by over 60 petabytes of proprietary biological data and the pharma industry's fastest supercomputer.

Big pharma partnerships have reached extraordinary scale. In 2025 alone, 114 AI drug discovery deals were announced with a combined potential value of $43.4 billion, up from 84 deals worth $11.8 billion in 2024. Isomorphic Labs, the Google DeepMind spinoff led by Demis Hassabis, secured deals with Eli Lilly ($45 million upfront, up to $1.7 billion in milestones) and Novartis ($37.5 million upfront, up to $1.2 billion) in January 2024, later expanding the Novartis agreement from three to six targets. The company raised $600 million in its first external funding round in March 2025, one of the largest AI biotech raises ever. Sanofi committed up to $5.2 billion with Recursion/Exscientia for 15 small molecules, while AstraZeneca partnered with CSPC Pharmaceutical ($110 million upfront, up to $5.2 billion) for AI-driven oral drug discovery. Xaira Therapeutics debuted in April 2024 with $1 billion in funding — the largest initial commitment in ARCH Venture Partners' history — co-founded by Nobel laureate David Baker.

Market projections converge on explosive growth. According to McKinsey & Company, AI could generate $60–$110 billion per year in economic value for pharmaceutical and medical products, with generative AI specifically contributing $15–$28 billion in discovery and early R&D alone. The World Economic Forum projects $350–$410 billion in annual value for pharma by 2030. The AI drug discovery market itself is valued at approximately $2.35 billion in 2025, with projections reaching $9–14 billion by the early 2030s at compound annual growth rates of 25–30%.
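The projected range is easy to sanity-check with compound-growth arithmetic. A minimal sketch using the figures quoted above (the six-year horizon to 2031 is our assumption for "early 2030s"):

```python
def project(value, cagr, years):
    """Compound a starting value at a constant annual growth rate."""
    return value * (1 + cagr) ** years

base = 2.35                       # market size in $B, 2025 (from the text)
low = project(base, 0.25, 6)      # 25% CAGR through 2031
high = project(base, 0.30, 6)     # 30% CAGR through 2031
print(f"${low:.1f}B–${high:.1f}B")   # → $9.0B–$11.3B
```

At the quoted growth rates the market lands at roughly $9–11 billion by 2031, consistent with the lower end of the cited projections; reaching $14 billion requires the 30% rate sustained a year or two longer.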

The FDA issued its first comprehensive draft guidance on AI in drug development in January 2025, establishing a seven-step credibility assessment framework informed by review of more than 500 AI-component submissions received between 2016 and 2023. In December 2025, the agency qualified its first AI-based tool for clinical trials — a cloud-based platform for scoring liver biopsies in NASH/MASH studies. By January 2026, the FDA and EMA jointly published 10 guiding principles for good AI practice in drug development.

AlphaFold Reshaped the Landscape of Structural Biology

AlphaFold 2's prediction of structures for virtually all 200 million known proteins — up from roughly 100,000 experimentally determined structures accumulated over decades — fundamentally changed how biologists approach drug targets. AlphaFold 3, released in May 2024, extended this capability to predict structures and interactions across all of life's molecules: proteins, DNA, RNA, ligands, ions, and small molecules. It achieved a 50% or greater improvement over existing methods for protein-molecule interactions and doubled prediction accuracy for some categories, surpassing physics-based tools on the PoseBusters benchmark. Its code and model weights were released for academic use by November 2024.

The Baker Lab at the University of Washington, where David Baker received the 2024 Nobel Prize in Chemistry for computational protein design, has pushed generative AI for proteins even further. RFdiffusion, published in Nature in 2023, created picomolar-affinity protein binders through pure computation. RFdiffusion3, released in December 2025, designs proteins interacting with any biomolecule at 10× the speed of its predecessor. RFantibody, published in Nature in November 2025, became the first tool to design complete antibody variable regions from scratch, with cryo-EM confirming designs matched computational models. Meanwhile, EvolutionaryScale's ESM3 — a 98-billion-parameter multimodal model trained on 2.78 billion proteins — generated a novel green fluorescent protein equivalent to simulating over 500 million years of natural evolution.

Materials Science Gains 800 Years of Knowledge Overnight

Google DeepMind's GNoME (Graph Networks for Materials Exploration), published in Nature in November 2023, predicted 2.2 million new crystal structures, of which 381,000 were identified as stable — a nearly tenfold increase over the roughly 48,000 previously known stable materials. Among these discoveries were 52,000 layered graphene-like compounds and 528 lithium-ion conductors, representing 25 times more than previously catalogued. External researchers independently synthesized 736 of the predicted structures, confirming the model's reliability. All predictions were contributed to the open-access Materials Project database.
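In studies like GNoME, "stable" conventionally means a predicted crystal lies on the convex hull of formation energies for its chemical system; anything above the hull can decompose into a mixture of hull phases. A toy sketch of that energy-above-hull test for a binary system, using made-up formation energies rather than GNoME data:

```python
def lower_hull(points):
    """Lower convex hull via Andrew's monotone chain. points: (x, energy)."""
    pts = sorted(points)
    hull = []
    for p in pts:
        # pop while the last hull point lies on or above segment hull[-2] -> p
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            cross = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def energy_above_hull(x, e, hull):
    """Vertical distance from (x, e) to the hull's linear interpolation."""
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        if x1 <= x <= x2:
            e_hull = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
            return e - e_hull
    raise ValueError("x outside hull range")

# Hypothetical formation energies (eV/atom) across A(1-x)B(x)
known = [(0.0, 0.0), (0.25, -0.40), (0.5, -0.55), (1.0, 0.0)]
hull = lower_hull(known)
# A candidate at x=0.5 with E_f = -0.50 sits 0.05 eV/atom above the hull
print(round(energy_above_hull(0.5, -0.50, hull), 3))  # → 0.05
```

Production pipelines do this in higher-dimensional composition spaces with DFT-computed energies, but the screening criterion is the same: candidates at or very near zero energy above hull are flagged as stable.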

Microsoft's complementary approach demonstrated equally striking speed. Using Azure Quantum Elements, Microsoft and Pacific Northwest National Laboratory screened 32 million potential inorganic materials, narrowing them to 18 promising candidates in approximately 80 hours, and produced a working battery prototype using the novel material N2116 — a solid-state electrolyte that could reduce lithium usage by up to 70% — in under nine months. Microsoft's MatterSim, a pretrained deep learning model covering the entire periodic table at temperatures from 0 to 5,000 K and pressures up to 1,000 GPa, serves as a universal simulator, while its generative counterpart MatterGen, published in Nature in January 2025, proposes new stable materials with desired properties. Together they form a design-validate flywheel that dramatically accelerates materials discovery.

Autonomous Laboratories Make Discovery Physical

Autonomous laboratories are making this computational-experimental loop physical. The A-Lab at Lawrence Berkeley National Laboratory, described in a companion Nature paper to GNoME, uses AI-guided robotics for closed-loop materials synthesis, processing 50–100× more samples per day than a human researcher and producing over 40 new materials autonomously. Chemify opened its Chemifarm facility in Glasgow in June 2025 — a 21,500-square-foot, £12 million automated chemistry factory claiming capability to run 1,000–5,000 reactions versus 10–15 for traditional automation. The Canadian government invested $200 million in self-driving lab development at the University of Toronto's Acceleration Consortium, its largest-ever research grant.

Critical voices offer necessary perspective. UC Santa Barbara researchers published a critique in Chemistry of Materials in April 2024, finding scant evidence for compounds that fulfill the criteria of novelty, credibility, and utility among GNoME's predictions. MIT Technology Review noted in December 2025 that AI materials discovery now needs to move into the real world, highlighting that bridging computational predictions and commercially viable, scalable materials remains the central challenge.

Weather Forecasting Undergoes a Second Revolution

AI weather models have achieved what many meteorologists considered improbable: outperforming the world's gold-standard numerical weather prediction systems at a fraction of the computational cost. Google DeepMind's GenCast, published in Nature in December 2024, outperformed the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble system on 97.2% of 1,320 test targets, rising to 99.8% of targets at lead times beyond 36 hours. It generates a 15-day probabilistic forecast in 8 minutes on a single TPU, compared with hours on supercomputers using tens of thousands of processors. GenCast provides an average of 12 additional hours of advance notice for tropical cyclone tracks.
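Probabilistic forecasts like GenCast's are typically scored with the continuous ranked probability score (CRPS), which rewards ensembles that are both sharp and centered on the observed outcome (lower is better). A minimal empirical-CRPS sketch with hypothetical temperature ensembles, not actual model output:

```python
def crps_ensemble(members, obs):
    """Empirical CRPS: E|X - y| - 0.5 * E|X - X'| over ensemble members."""
    n = len(members)
    term1 = sum(abs(m - obs) for m in members) / n
    term2 = sum(abs(a - b) for a in members for b in members) / (n * n)
    return term1 - 0.5 * term2

# Hypothetical 2 m temperature forecasts (°C) from two ensembles
sharp = [14.8, 15.1, 15.0, 15.3, 14.9]    # tight around the outcome
diffuse = [12.0, 14.0, 16.0, 18.0, 15.0]  # wide spread
obs = 15.0
print(crps_ensemble(sharp, obs) < crps_ensemble(diffuse, obs))  # → True
```

Headline figures like "outperformed on 97.2% of targets" come from comparing scores of this kind, aggregated over many variables, pressure levels, and lead times.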

GenCast built on the success of GraphCast, published in Science in December 2023, which outperformed ECMWF's deterministic HRES system on 90% of verification targets, generating 10-day forecasts in under one minute. Huawei's Pangu-Weather, published in Nature in July 2023, achieved comparable accuracy at 10,000× the speed of conventional ensemble prediction. Microsoft's Aurora, published in Nature in May 2025, operates as a 1.3-billion-parameter foundation model trained on over one million hours of geophysical data, outperforming operational forecasts on 91% of targets at 0.1° resolution (approximately 11 km). ECMWF itself moved its own AI model, AIFS, to operational status in 2024 — the first major meteorological agency to operationalize an AI weather system.

Climate Action Across Multiple Fronts

Beyond weather, AI is advancing climate action across multiple fronts. Argonne National Laboratory used generative AI to design over 120,000 new metal-organic framework candidates for carbon capture in 30 minutes, accelerating discovery by 500% compared with traditional methods. Google DeepMind's data center cooling AI achieved 40% reductions in cooling energy consumption. AI-involved climate tech venture capital reached a record $6.6 billion in 2025, a 59% jump from the prior year, even as broader climate tech deal counts declined.

In agriculture, meta-reviews of 95 studies spanning 2013–2023 document that AI integration delivers average yield increases of 25%, cost reductions of 28%, water savings of 22%, and pesticide reductions of 30–40% through precision application. AI-powered biodiversity monitoring has scaled dramatically: Wildlife Insights' SpeciesNet model, trained on 65 million camera trap images, recognizes approximately 2,500 species across 95 countries, while iNaturalist's AI now identifies over 106,000 taxa from 200 million community observations.

Particle Physics, Gravitational Waves, and Fusion Energy Leverage AI Breakthroughs

At CERN, the CMS Collaboration demonstrated for the first time that machine learning can fully reconstruct particle collisions at the Large Hadron Collider, replacing traditional hand-crafted particle-flow logic with a single trained model. The ATLAS Collaboration published one of the first uses of unsupervised machine learning in an LHC result, searching for anomalous collision events that could signal new physics beyond the Standard Model. With the ATLAS detector producing over 60 terabytes per second from up to one billion collisions per second, AI-based trigger systems using compressed neural networks on FPGAs now make filtering decisions in tens of nanoseconds.

In gravitational wave detection, a September 2025 Science paper described Deep Loop Shaping, a reinforcement learning method developed by Caltech and Google DeepMind that quieted LIGO mirror motions by 30 to 100 times more than traditional noise reduction alone, potentially enabling detection of intermediate-mass black holes and earlier-stage coalescences. Separately, an AI algorithm called Urania, published in Physical Review X in 2025, designed novel interferometric detectors that outperform the best known next-generation designs by more than an order of magnitude.

Fusion Energy Research Accelerates

Fusion energy research has become one of AI's most dramatic application areas. Google DeepMind partnered with Commonwealth Fusion Systems in October 2025 to accelerate development of the SPARC tokamak, using the open-source TORAX plasma simulator and reinforcement learning for real-time control. TAE Technologies, in a decade-long collaboration with Google, achieved stable plasma at over 70 million °C using only neutral beam injection — a first-of-its-kind result published in Nature Communications in April 2025 that drastically simplifies reactor design. AI-controlled plasma experiments have also succeeded on the DIII-D tokamak in San Diego, in work led by Princeton researchers, and on the TCV tokamak at EPFL.

Quantum Computing and AI Converge

Quantum computing and AI are converging rapidly. Google's Willow processor (105 qubits), announced in December 2024, demonstrated below-threshold quantum error correction — error rates decreasing exponentially as qubit count scales — and completed a benchmark task in under five minutes that would take today's fastest supercomputer an estimated 10²⁵ years. IBM's Quantum Nighthawk (120 qubits) targets 15,000 two-qubit gates by 2028. The number of peer-reviewed quantum error correction papers surged from 36 in 2024 to 120 in the first ten months of 2025, and global government quantum funding now exceeds $50 billion.
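"Below threshold" means each step up in code distance multiplicatively suppresses the logical error rate rather than amplifying it. A toy sketch of that scaling; the suppression factor and base rate here are illustrative placeholders, not Willow's measured values:

```python
def logical_error(eps_base, lam, d_base, d):
    """Logical error per cycle, assuming each +2 in code distance
    divides the rate by the suppression factor lam."""
    return eps_base / lam ** ((d - d_base) // 2)

# Illustrative numbers (hypothetical, not measured hardware data)
eps3 = 3e-3   # logical error per cycle at distance 3
lam = 2.1     # suppression factor per distance step
for d in (3, 5, 7):
    print(f"d={d}: {logical_error(eps3, lam, 3, d):.2e}")
```

Because the suppression compounds with every distance step, a machine operating below threshold can in principle reach arbitrarily low logical error rates simply by adding qubits, which is why the result is treated as a milestone.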

Genomics Enters the Foundation Model Era

Google DeepMind's AlphaGenome, published in Nature in January 2026, analyzes up to one million base pairs at single-nucleotide resolution, predicting gene expression, chromatin accessibility, splicing, and transcription factor binding. It outperformed external models on 24 of 26 variant effect prediction evaluations. Nearly 3,000 scientists across 160 countries submitted roughly one million requests per day within weeks of release. This complements AlphaMissense, published in Science in 2023, which categorizes effects of missense variants in the 2% of the genome that codes for proteins, while AlphaGenome covers the remaining 98%.

AI-designed gene editors represent a particularly striking advance. Profluent Bio's OpenCRISPR-1, published in Nature in 2025, is the first AI-generated gene editor to successfully edit the human genome. Created using large language models trained on the CRISPR-Cas Atlas — 5.1 million CRISPR-Cas proteins mined from 26 terabases of genomic data — it is more than 400 mutations distant from natural SpCas9 yet achieves comparable activity with higher specificity and a 95% reduction in off-target editing.

Cancer Genomics and Precision Medicine

In cancer genomics, GRAIL's Galleri multi-cancer early detection test, analyzing circulating tumor DNA methylation patterns to detect 50+ cancer types, completed its FDA premarket approval submission on January 29, 2026. The PATHFINDER 2 study of nearly 36,000 participants showed a greater than sevenfold increase in cancer detection rate when Galleri was added to standard screening, with 92% accuracy in predicting cancer signal origin. Tempus AI, which went public in June 2024 and guided 2025 revenue of $1.26 billion (82% growth), now operates a multimodal data library of over 40 million research records and has acquired both Ambry Genetics and Paige to build an end-to-end precision oncology platform.

Single-cell foundation models are transforming how researchers interpret biological complexity at the cellular level. scGPT, published in Nature Methods in August 2024 and pretrained on over 33 million cells, performs cell type annotation, perturbation prediction, and gene network inference. The Human Cell Atlas project now comprises tens of millions of mapped cells, and spatial transcriptomics tools like Spotiphy enhance sequencing-based methods to single-cell resolution. The precision medicine market is projected to grow from $151.6 billion in 2024 to approximately $470 billion by 2034.

Government and Corporate Investment Creates an Infrastructure Arms Race

Global AI venture capital reached $202.3 billion in 2025, roughly quadrupling from $55.6 billion in 2023, with AI capturing approximately half of all worldwide venture funding. AI-specific healthcare and biotech venture investment accounts for a growing share: $11 billion across 348 rounds for AI-ML drug discovery alone in 2025, up from $8.9 billion in 2024. Ten AI drug discovery companies completed IPOs in 2024–2025, raising a combined $1.7 billion.

Government investment is surging worldwide. The U.S. CHIPS and Science Act authorizes $81 billion for the National Science Foundation over five years and nearly 50% growth in the DOE Office of Science budget to $10.8 billion by FY2027. The DOE is building Solstice, a system with 100,000 NVIDIA Blackwell GPUs at Argonne National Laboratory, alongside the Aurora exascale supercomputer already supporting over 70 science projects. The National AI Research Resource (NAIRR) pilot has connected 600+ research teams across 49 states. The European Commission launched RAISE (Resource for AI Science in Europe) in November 2025 with €107 million in pilot funding and plans to double Horizon Europe's annual AI investment to over €3 billion. China's total AI investment in 2025 is estimated at $84–98 billion by Bank of America, with government accounting for up to $56 billion. South Korea committed $23 billion over five years across 12 strategic technology areas including AI.

NVIDIA has become the indispensable infrastructure provider for scientific AI, with its BioNeMo platform for drug discovery available across all major clouds, partnerships with the DOE for exascale computing, and venture investments in companies like Lila Sciences. Eli Lilly announced a $1 billion co-innovation AI lab with NVIDIA in January 2026 for drug discovery. The three U.S. exascale supercomputers — Frontier, Aurora, and El Capitan — are now all operational, with Aurora achieving 10.6 exaflops of AI performance on the HPL-MxP benchmark.

Foundation Models and Generative AI Form the Technical Backbone

The scientific AI stack has rapidly converged on the foundation model paradigm. In chemistry, IBM's SMI-TED processes 91 million SMILES representations with 289 million parameters, while Liquid AI's LFM2 at 2.6 billion parameters was specifically optimized for drug discovery workloads. In biology, Meta's ESM family evolved from ESM-2 (up to 15 billion parameters on 250 million protein sequences) to ESM3 (98 billion parameters trained with over 10²⁴ FLOPs). In materials science, the Matbench Discovery benchmark evaluates 45 competing models, with PET-OAM-XL leading at F1=0.924 and a discovery acceleration factor of 6.075.

Generative AI for molecular design has progressed from early generative adversarial networks and variational autoencoders to sophisticated diffusion models. DiffDock from MIT achieved state-of-the-art blind molecular docking. RFdiffusion and its successors from the Baker Lab demonstrated that diffusion models can design proteins with picomolar binding affinity from scratch. Chroma from Generate Biomedicines, released under an open-source Apache 2.0 license, enables programmable protein generation with sub-quadratic scaling for large complexes. Active-learning-enhanced VAEs have generated validated CDK2 inhibitors achieving nanomolar potency.

Training Data Remains a Critical Bottleneck

Training data remains a critical bottleneck. Key open datasets — the Protein Data Bank (~220,000 structures), ChEMBL (2.4 million bioactive molecules), ZINC-22 (54.9 billion molecules), Materials Project (~150,000 materials), and Open Catalyst datasets (260 million DFT calculations for OC20) — form the backbone of scientific AI training. However, systematic challenges persist: most materials datasets contain predominantly near-equilibrium structures, causing systematic softening in machine learning interatomic potentials. Meta's release of OMat24 (over 100 million DFT calculations) and OC25 specifically addressed the need for non-equilibrium training data.

Explainability and the Accuracy-Interpretability Trade-Off

Explainability remains a fundamental tension. SHAP values and attention visualization are widely used for post-hoc interpretability, while symbolic regression is emerging as a powerful approach for discovering human-readable equations from data. Kolmogorov-Arnold Networks, introduced in 2024, offer a novel inherently interpretable architecture. But the accuracy-interpretability trade-off persists: the best-performing models remain black boxes, which is particularly problematic in FDA-regulated domains where the agency's 2025 draft guidance mandates transparency about model architectures and training data.
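The symbolic-regression idea can be shown in miniature: search a space of candidate expressions for one that fits the data in a human-readable form. This sketch brute-forces a tiny hand-picked basis with a closed-form scale fit; real systems (and no specific package is used here) search far richer expression grammars:

```python
import math

def fit_symbolic(xs, ys, basis):
    """Pick the basis function minimizing squared error for y ≈ a * f(x),
    with the scale a solved in closed form by least squares."""
    best = None
    for name, f in basis.items():
        fx = [f(x) for x in xs]
        denom = sum(v * v for v in fx)
        if denom == 0:
            continue
        a = sum(v * y for v, y in zip(fx, ys)) / denom
        err = sum((a * v - y) ** 2 for v, y in zip(fx, ys))
        if best is None or err < best[0]:
            best = (err, name, a)
    return best

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0 * x ** 2 for x in xs]          # hidden law: y = 2x^2
basis = {"x": lambda x: x,
         "x^2": lambda x: x * x,
         "exp(x)": math.exp,
         "log(x)": math.log}
err, name, a = fit_symbolic(xs, ys, basis)
print(f"y ≈ {a:.2f} * {name}")           # recovers y ≈ 2.00 * x^2
```

The appeal for regulated domains is that the output is an equation a reviewer can inspect, rather than a set of opaque weights; the cost is that expressive power is bounded by the expression space being searched.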

Dual-Use Risks and Governance Challenges Demand Attention

The potential for misuse is not theoretical. In a landmark 2022 Nature Machine Intelligence paper, researchers from Collaborations Pharmaceuticals demonstrated that by simply inverting their drug discovery AI to reward toxicity rather than penalize it, the system generated 40,000 potentially lethal molecules in under six hours on a standard desktop computer, including known chemical warfare agents and novel molecules predicted to be more toxic than VX nerve agent.

The reproducibility crisis in science risks deepening with AI. A 2024 Nature analysis warned that ill-informed use of AI is driving a deluge of unreliable or useless research. The UK Royal Society's 2024 report identified insufficient documentation of training data, limited access to computational infrastructure, and lack of explainability as key barriers. During COVID-19, AI-based CT scan diagnostic tools that were hailed as breakthroughs proved non-reproducible across different hospital settings.

Patent law has reached a global consensus that AI cannot be an inventor. The DABUS case — where Dr. Stephen Thaler listed his AI system as inventor — was rejected by the USPTO, European Patent Office, UK Supreme Court, Australian courts, German Federal Patent Court, and Japan's IP High Court. However, the USPTO's February 2024 guidance clarified that AI-assisted inventions are not categorically unpatentable: a human who makes a significant contribution to the claimed invention can qualify as inventor.

The EU AI Act, which entered into force in August 2024 with phased implementation through 2027, includes a scientific research exemption — but only for AI systems developed and used for the sole purpose of R&D, a narrow framing that may exclude research whose outputs later inform commercial products. The U.S. regulatory landscape shifted dramatically when the Trump administration rescinded Biden's comprehensive AI executive order on its first day, replacing it with a deregulatory, innovation-focused approach. All major scientific publishers — Nature, Science, Elsevier, Wiley, IEEE — now prohibit listing AI as an author and require disclosure of AI use in manuscripts.

Universities Race to Build AI-Science Bridges While Talent Drains to Industry

The institutional response to scientific AI has been substantial but faces a fundamental challenge: the researchers who can bridge AI and domain science are extraordinarily scarce and increasingly drawn to industry. In 2011, new AI PhDs split roughly equally between industry (40.9%) and academia (41.6%). By 2022, 70.7% chose industry versus just 20% for academia. Stanford's Fei-Fei Li made a direct appeal to the U.S. president for funding to prevent Silicon Valley from pricing academics out of AI research.

Major institutional investments aim to close this gap. MIT's Schwarzman College of Computing was established to develop bilinguals — individuals skilled in both computing and domain sciences. Stanford HAI trained over 80 congressional staffers and 8,000 government employees. Cambridge's Accelerate Programme for Scientific Discovery funded 13 interdisciplinary projects in 2024. The UK government's AI for Science strategy addresses shortages of research software engineers, HPC specialists, and data stewards.

Democratization of scientific tools through AI offers a counterpoint. AlphaFold has been used by 2 million researchers worldwide. OpenScholar, developed by the Allen Institute for AI and published in Nature, answers science questions more accurately than GPT-4o — and sometimes better than human experts — with all code and data freely accessible. Sakana AI's AI Scientist system can produce a complete research paper for approximately $15 in computing costs, potentially enabling resource-constrained institutions to participate in frontier research. The debate between open science and proprietary AI models continues to intensify: 65.7% of foundation models were open-source in 2023, but the highest-performing models remain closed and industry-controlled.

The Trajectory Points Toward Autonomous Scientific Discovery

The concept of the AI scientist — autonomous systems that generate hypotheses, design experiments, run them, and interpret results — is transitioning from aspiration to early reality. Sakana AI's AI Scientist system, developed with Oxford and UBC, automates the full research lifecycle using large language models, though independent evaluation found significant limitations: 42% of experiments failed due to coding errors, and the system sometimes misclassified established concepts as novel. More convincingly, Coscientist, published in Nature, independently planned and performed complex chemical syntheses — the first instance of a non-human intelligence doing so.

Isomorphic Labs unveiled its AI drug design engine IsoDDE in February 2026, which more than doubles AlphaFold 3's performance on the hardest targets and achieves up to 20× better performance than competing models on antibody benchmarks. Demis Hassabis stated that most diseases could be cured within ten years. Whether this proves prophetic or premature, the direction of travel is clear.

The World Economic Forum named AI for Scientific Discovery as the top emerging technology of 2024, citing its potential to enable discoveries near-impossible otherwise. McKinsey identifies R&D as potentially the most compelling yet least appreciated area for generative AI value creation, with the broader bio-revolution projected to deliver $2–4 trillion annually by 2030–2040. The evidence for timeline compression is mounting: Insilico Medicine's fully AI-designed drug reached Phase IIa in roughly half the traditional timeline, Sonrai Analytics demonstrated 21-month acceleration in target identification and validation, and SandboxAQ compressed neurodegenerative disease research from years to months.

Scientific AI has moved beyond incremental augmentation to become a fundamentally new mode of discovery. The convergence of foundation models trained on billions of molecular structures, exascale computing infrastructure, autonomous laboratories, and unprecedented investment is creating compounding returns: AI discovers materials that enable better batteries, which power the data centers that train better AI models, which discover still more materials. The 2024 Nobel Prizes were not a culmination but a starting signal. The critical challenges ahead — ensuring reproducibility, managing dual-use risks, maintaining scientific openness against commercial pressures, and building the interdisciplinary workforce needed to realize AI's potential — will determine whether this transformation delivers equitable, trustworthy advances or concentrates scientific power in a few well-resourced institutions. The evidence through early 2026 suggests the pace of change will only accelerate: the question is no longer whether AI will transform science, but how wisely we will govern that transformation.


Disclaimer: The information in this article is provided for general informational purposes only and does not constitute legal, regulatory, tax, investment, financial or other professional advice, and should not be relied upon as such. You should obtain independent advice from qualified professionals in the relevant jurisdiction(s) before making any decision or taking any action based on this article. While reasonable efforts are made to ensure that the information is accurate and current, the content may be incomplete, may contain errors, and may become outdated over time. 1BusinessWorld and its contributors make no representations or warranties, express or implied, as to the completeness, reliability, timeliness, or suitability of the content. To the fullest extent permitted by law, 1BusinessWorld and its contributors accept no liability for any loss or damage arising from the use of or reliance on this article. The views expressed are provided for informational purposes only and do not constitute guidance or advice, and do not necessarily reflect the views of 1BusinessWorld or its affiliates.