Artificial Intelligence (AI)

Artificial intelligence may seem like one of those things that no one asked for but that everyone uses. With a myriad of applications, it can be helpful in many areas. Personally, I am eager to see what it is capable of in the field of video games: having “real” conversations with NPCs could make the experience far more immersive. Another area where it can be really helpful is planning. Have you ever tried to plan a trip, searching through several pages to find the best combination of flights, accommodation and public transport? It would be great to have a tool that provides excellent suggestions in a few minutes instead of hours. The way artificial intelligence interprets things is also interesting. It will continue to improve, and sometimes that may seem daunting, but the ability to merge thousands of ideas to create or develop a new one in less than a minute is quite remarkable to witness.

Here, I asked an AI image generator to merge a round-bottom flask, commonly used in chemistry laboratories, with the well-known painting styles of Van Gogh, Monet or Kandinsky. It generated vivid and colourful images and ideas. In the end, science is nothing more than the place where creativity intertwines with intellect, and where curiosity serves as the guiding compass on our quest for knowledge. With every experiment conducted, scientists add their own strokes of brilliance to the ever-evolving canvas of human progress, illuminating new pathways and shaping a future that holds boundless potential for us all, much like the exquisite paintings of the past. Just as Van Gogh’s brushstrokes breathed life into his masterpieces, scientists use their knowledge and expertise to unravel the intricate mysteries of our universe. Just as Monet’s paintings capture the subtle interplay of light and colour, scientists strive to untangle the intricate interconnections of the natural world. Just as Kandinsky’s art transcended traditional boundaries, science pushes the limits of our understanding, challenging conventional thinking. The idea of merging all of this with artificial intelligence looks, to say the least, truly fascinating.

General approaches to nanoparticle synthesis

Several approaches can be followed for the synthesis of nanomaterials. One of the most common ways to classify them is into the “top-down” and “bottom-up” methodologies.

 “Top-down”

Through this strategy, the bulk material is reduced in size to produce nanoparticles. Mechanical milling, in which the macroscopic material is converted into smaller particles by attrition, and laser ablation, in which high-energy laser light removes and vaporises material from a solid surface, are among the most widely applied techniques.

“Bottom-up”

In this approach, the nanoparticles are produced by assembling atoms or molecules in a liquid or gas phase. In flame synthesis, the starting material is evaporated, mixed with a fuel and an oxidising agent, and injected into a flame, where the nanoparticles form.

Reducing metal precursors in a liquid phase to form nanoparticles, often called wet colloidal synthesis, is also one of the most common methodologies. It is carried out by mixing solutions of different ions or molecules under controlled conditions to form the final nanomaterials. Let’s look at the colloidal synthesis techniques in more detail.

Nucleation, growth and stability of the nanoparticles

In general, the synthesis of metallic nanoparticles can be divided into two main stages, namely nucleation (the formation of a new phase) and growth (the development of the nuclei into the final NPs). In addition, a stabilisation mechanism must be present; otherwise, the growing NPs inevitably aggregate into bulk material.

  • Nucleation

Paying closer attention to colloidal synthesis, we can distinguish between heterogeneous and homogeneous nucleation.

Heterogeneous nucleation occurs at nucleation sites in contact with the liquid or vapour, with preferential sites such as phase boundaries or impurities. This type of nucleation is the driving force in seed-mediated synthesis.

Homogeneous nucleation occurs spontaneously and randomly, but it requires certain conditions, such as a supersaturated state.
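To make the supersaturation requirement concrete, classical nucleation theory (CNT) predicts a critical nucleus radius r* = 2γVm/(RT ln S) below which a nucleus tends to redissolve. The minimal Python sketch below uses assumed, order-of-magnitude values (loosely inspired by gold), not data from this text:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1


def critical_radius(gamma, v_m, temperature, supersaturation):
    """CNT critical nucleus radius r* = 2*gamma*v_m / (R*T*ln(S)).

    gamma: surface energy (J m^-2); v_m: molar volume (m^3 mol^-1);
    supersaturation: ratio C/C_eq, which must exceed 1 for nucleation.
    All values passed below are assumptions for illustration only.
    """
    return 2.0 * gamma * v_m / (R * temperature * math.log(supersaturation))


# Order-of-magnitude inputs loosely inspired by gold:
for s in (2, 5, 10, 100):
    r_star = critical_radius(gamma=1.0, v_m=1.02e-5,
                             temperature=298.0, supersaturation=s)
    print(f"S = {s:3d} -> r* = {r_star * 1e9:4.1f} nm")
# Higher supersaturation lowers the critical radius, which is why a burst of
# homogeneous nucleation requires the supersaturated state described above.
```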

  • Growth

Throughout the last decades, different theories have arisen to explain the phenomena involved in the growth of nuclei into the final NPs. All of them can be subclassified into two large families: growth mediated by atoms and growth mediated by nanoparticles. In the first family, atoms act as building blocks, driving growth through surface addition. The second family comprises the formation of mesocrystalline nanoparticles, in which growth occurs by the addition of initial nanoparticles rather than of atoms.

  • Atom-mediated growth

Within this family it is possible to group a multitude of theories that explain, in a consolidated way, the phenomenology observed for the great majority of NPs. The most relevant growth theories are described below:

  • The LaMer mechanism, based on LaMer’s research on aerosols and sulfur hydrosols, proposes separating nucleation and growth into three stages. The first stage is the formation of the metal atoms. When they reach a certain level of supersaturation (Cmin), the energy barrier for self-nucleation can be overcome and nucleation starts. Afterwards, the concentration of metal atoms drops below the minimum supersaturation level, no further self-nucleation occurs, and the existing nuclei grow into nanoparticles. The LaMer process only describes nucleation and growth; it does not address the evolution of the NPs’ shapes or size distribution.
  • Ostwald ripening. According to the Gibbs-Thomson relation, the solubility of a nanoparticle depends on its size: as the size decreases, the solubility increases and the particle becomes less stable. Ostwald ripening therefore proposes a growth mechanism in which nanoparticles below a certain size re-dissolve, providing a fresh source of monomers that feeds the growth of the larger ones. In short, small particles are more soluble than big ones and tend to dissolve and re-precipitate onto larger particles (see the numerical sketch after this list).
  • Digestive ripening, on the other hand, can be considered the opposite of Ostwald ripening: small particles grow at the expense of the largest ones, which re-dissolve and act as a source of monomers for the smaller nanoparticles. A more complex equation, derived from the Gibbs-Thomson relation, is needed to describe digestive ripening. The overall process can be summarised in three steps: 1) addition of a ligand (e.g. dodecanethiol) that helps break up the larger nanoparticles and narrow the size distribution; 2) purification of the nanoparticles to obtain an isolated ligand-nanoparticle system; and 3) heating, usually in high-boiling-point solvents, in the presence of the ligand to obtain monodisperse nanoparticles.
  • The Finke-Watzky mechanism describes nucleation and growth occurring simultaneously, while still obeying the critical-size condition of classical nucleation theory (CNT). Slow, continuous nucleation proceeds with rate constant k1, and autocatalytic surface growth with rate constant k2 (also sketched after this list).
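To illustrate two of the ideas above numerically, here is a minimal sketch with values assumed purely for illustration: the Gibbs-Thomson size-dependent solubility that drives Ostwald ripening, and the Finke-Watzky two-step kinetics, A → B (nucleation, k1) followed by A + B → 2B (autocatalytic growth, k2), integrated with a simple Euler scheme:

```python
import math

# --- Gibbs-Thomson relation: S(r)/S_bulk = exp(2*gamma*v_m / (r*R*T)) ---
R = 8.314      # gas constant, J mol^-1 K^-1
GAMMA = 1.0    # surface energy, J m^-2 (assumed)
V_M = 1.02e-5  # molar volume, m^3 mol^-1 (assumed, roughly gold)
T = 298.0      # temperature, K

for r_nm in (2, 5, 10, 50):
    ratio = math.exp(2 * GAMMA * V_M / (r_nm * 1e-9 * R * T))
    print(f"r = {r_nm:2d} nm -> S(r)/S_bulk = {ratio:5.2f}")
# Smaller particles are markedly more soluble, which is what makes them
# re-dissolve and feed the larger ones during Ostwald ripening.

# --- Finke-Watzky two-step kinetics (rate constants assumed) ---
k1, k2 = 1e-3, 1.0   # s^-1 and L mol^-1 s^-1
a, b = 0.01, 0.0     # precursor A and particle product B, mol/L
dt, t = 0.1, 0.0     # forward-Euler time step and clock, s

while t < 3600.0:
    rate = k1 * a + k2 * a * b   # nucleation + autocatalytic growth
    a -= rate * dt
    b += rate * dt
    t += dt

print(f"after {t:.0f} s: A = {a:.5f} M, B = {b:.5f} M")
# The conversion follows the characteristic sigmoidal (induction, then burst)
# curve of the Finke-Watzky mechanism.
```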

  • Nanoparticle-mediated growth

Thanks to the development of modern electron microscopy techniques (among other technologies) in recent decades, an increasing number of reports have emerged describing the synthesis of metal nanoparticles whose growth is difficult to explain with atom-mediated theories. The final products of these processes form through the fusion of nanoparticles created in the early stages of the reaction and usually evolve into mesocrystalline nanostructures. According to the attachment method, nanoparticle-mediated growth can be classified into:

  • Coalescence. During their formation, nanoparticles can undergo coalescence. Coalescence can be seen as an overall growth process in which separate particles first aggregate and then continue to grow as a single nanoparticle. In this process there is no preference for particular crystallographic planes.
  • Oriented attachment. Here the growth involves the self-organisation of adjacent particles that share a common crystallographic orientation, followed by the joining of these particles at a planar interface.

  • Stability

Once the nanoparticles form, their colloidal stability in the dispersing medium plays an important role. Van der Waals, electrostatic or magnetic forces can lead to nanoparticle agglomeration, aggregation or coalescence.

  • Van der Waals interactions govern nanoparticle aggregation at the most basic level. Permanent or induced dipoles within the nanoparticles can result in net attractive forces between them and subsequent aggregation.
  • An electric double layer (EDL) formed by solvated ions or molecules can shield the surface charge, create repulsive interparticle forces and stabilise the system. Two parts can be distinguished: the Stern layer, which consists of counter-ions adsorbed on the charged surface of the nanoparticle (NP), and the diffuse layer, an atmosphere of ions of opposite net charge surrounding the NP. The thickness of the EDL is called the Debye length (a short calculation follows this list).
  • The DLVO theory (named after its developers Derjaguin, Landau, Verwey and Overbeek) of colloidal stability assumes that the total force between colloidal particles is the sum of the van der Waals forces (attractive) and the EDL forces (repulsive).
  • Steric stabilisation, on the other hand, can be achieved by adsorbing large molecules onto the surface of the particles, creating a physical barrier that prevents aggregation. Surfactants (e.g. hexadecyltrimethylammonium bromide, CTAB), polymers (e.g. polyvinylpyrrolidone, PVP), proteins or other macromolecules can be used. The stability is then determined by, among other factors, the solubility, average chain length, concentration and temperature.
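As a small numerical illustration of the EDL picture (standard physical constants; the ionic strengths are arbitrary examples), the Debye length for a symmetric 1:1 electrolyte in water can be estimated as follows:

```python
import math

# Debye length of the electric double layer for a symmetric 1:1 electrolyte:
#   kappa^-1 = sqrt(eps_r * eps_0 * k_B * T / (2 * N_A * e**2 * I))
# with the ionic strength I converted to mol m^-3.
EPS_0 = 8.854e-12   # vacuum permittivity, F m^-1
K_B = 1.381e-23     # Boltzmann constant, J K^-1
N_A = 6.022e23      # Avogadro constant, mol^-1
E = 1.602e-19       # elementary charge, C


def debye_length(ionic_strength_molar, eps_r=78.5, temperature=298.0):
    """Debye length in metres; eps_r defaults to water at ~25 C."""
    i = ionic_strength_molar * 1000.0  # mol/L -> mol/m^3
    return math.sqrt(EPS_0 * eps_r * K_B * temperature
                     / (2 * N_A * E**2 * i))


# Added salt screens the surface charge and shrinks the EDL:
for c in (0.001, 0.01, 0.1):
    print(f"I = {c:5.3f} M -> Debye length = {debye_length(c) * 1e9:.2f} nm")
```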

There are many other theories, but this summary focuses on the most common ones discussed in the scientific literature.

A quick introduction to sustainable chemistry

Anastas and Warner acted as the catalyst within the scientific community for the development of sustainable chemistry. It was in 1998 that they published the 12 principles of green chemistry, which preceded the publication in 2003, by Anastas and Zimmerman, of the 12 principles of green chemical engineering. These publications brought to the attention of scientists and engineers the need to develop a more sustainable scientific activity. The 12 principles of green chemistry, as listed in the green chemistry section of the American Chemical Society, are:

1. Prevention

It is better to prevent waste than to treat or clean up waste after it has been created.

2. Atom Economy

Synthetic methods should be designed to maximize the incorporation of all materials used in the process into the final product (a worked atom-economy example follows this list).

3. Less Hazardous Chemical Syntheses

Wherever practicable, synthetic methods should be designed to use and generate substances that possess little or no toxicity to human health and the environment.

4. Designing Safer Chemicals

Chemical products should be designed to affect their desired function while minimizing their toxicity.

5. Safer Solvents and Auxiliaries

The use of auxiliary substances (e.g., solvents, separation agents, etc.) should be made unnecessary wherever possible and innocuous when used.

6. Design for Energy Efficiency

Energy requirements of chemical processes should be recognised for their environmental and economic impacts and should be minimised. If possible, synthetic methods should be conducted at ambient temperature and pressure.

7. Use of Renewable Feedstocks

A raw material or feedstock should be renewable rather than depleting whenever technically and economically practicable.

8. Reduce Derivatives

Unnecessary derivatization (use of blocking groups, protection/deprotection, temporary modification of physical/chemical processes) should be minimised or avoided, if possible, because such steps require additional reagents and can generate waste.

9. Catalysis

Catalytic reagents (as selective as possible) are superior to stoichiometric reagents.

10. Design for Degradation

Chemical products should be designed so that at the end of their function they break down into innocuous degradation products and do not persist in the environment.

11. Real-time Analysis for Pollution Prevention

Analytical methodologies need to be further developed to allow for real-time, in-process monitoring and control prior to the formation of hazardous substances.

12. Inherently Safer Chemistry for Accident Prevention

Substances and the form of a substance used in a chemical process should be chosen to minimize the potential for chemical accidents, including releases, explosions, and fires.
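As an aside that is not part of the ACS list: principle 2 is often quantified through the atom economy, the percentage of the reactants' mass that ends up in the desired product. Here is a minimal sketch using a textbook Fischer esterification (the molar masses are standard values; the choice of example reaction is mine):

```python
# Atom economy: atom economy (%) = 100 * M(product) / sum(M(reactants))


def atom_economy(product_mass, reactant_masses):
    """Percentage of reactant mass incorporated into the desired product."""
    return 100.0 * product_mass / sum(reactant_masses)


# CH3COOH + C2H5OH -> CH3COOC2H5 + H2O (Fischer esterification)
M_ACETIC_ACID = 60.05    # g/mol
M_ETHANOL = 46.07        # g/mol
M_ETHYL_ACETATE = 88.11  # g/mol (desired product; the water is "waste")

ae = atom_economy(M_ETHYL_ACETATE, [M_ACETIC_ACID, M_ETHANOL])
print(f"atom economy = {ae:.1f} %")  # ~83 %: the rest leaves as water
```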

Taking a close look at the 12 principles, we can adopt strategies that help us follow them: using better solvents, using energy sources that speed up the chemical process, obtaining better catalysts, or developing new sensors that enable the fast recognition of toxic analytes. Let’s dig a little deeper into some of them!

Solvents

Solvents play an essential role in reactions. They are most often used in larger quantities than any reactive species and are one of the most important generators of waste. Solvents like hexane or toluene are petroleum derivatives; others are volatile, like acetone, or flammable, like diethyl ether, which makes working with them dangerous. The easy dispersion, accumulation and toxicity of chlorinated solvents in the environment make them an important source of contamination. A single-solvent process, avoiding mixtures of solvents, can be a strategy to reduce waste if, for example, we are able to recover the solvent and use it again. However, the preferential use of water, as well as of new solvents, can also have a positive impact both on the environment and on laboratory results.

Water has several advantages: it is safe, non-toxic, cheap and non-flammable. It allows working at moderate temperatures (0-100 ºC), but also under near-critical conditions (near-critical water (NCW), 150-350 ºC, 4-200 bar) or supercritical conditions (supercritical water (SCW), >374 ºC, >221 bar), in which properties such as density or polarity are strongly modified.

However, few organic compounds are soluble in water, which means that new solvents must be developed to improve on the “classic” ones.

These new solvents include ionic liquids, perfluorinated compounds and supercritical fluids. All of them have advantages and drawbacks, showing that, nowadays, the perfect solvent does not exist, only the one most convenient for a given chemical process.

Catalysts

A catalyst is a compound that affects the reaction rate, speeding it up or slowing it down (in the latter case it is called an inhibitor), but that does not appear in the stoichiometry of the reaction. The catalyst is not consumed during the reaction, or is regenerated at the end of each cycle, so it can be reused in the reaction more than once.
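As a rough, generic illustration of why catalysis matters (not a model of any particular catalyst), the Arrhenius equation k = A·exp(−Ea/RT) shows how lowering the activation energy raises the rate constant; every number below is an assumption:

```python
import math

# Arrhenius picture of catalysis: a catalyst opens a pathway with a lower
# activation energy Ea, so the rate constant k = A * exp(-Ea / (R * T)) grows.
R = 8.314  # gas constant, J mol^-1 K^-1


def rate_constant(prefactor, ea_joules_per_mol, temperature):
    return prefactor * math.exp(-ea_joules_per_mol / (R * temperature))


A = 1e13   # s^-1, typical order of magnitude for a unimolecular step (assumed)
T = 298.0  # K
k_uncat = rate_constant(A, 100e3, T)  # Ea = 100 kJ/mol, uncatalysed (assumed)
k_cat = rate_constant(A, 70e3, T)     # Ea =  70 kJ/mol, catalysed (assumed)

print(f"k (uncatalysed) = {k_uncat:.3e} s^-1")
print(f"k (catalysed)   = {k_cat:.3e} s^-1")
print(f"speed-up        = {k_cat / k_uncat:.1e}x")  # ~1e5 at room temperature
```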

We can classify catalysts as homogeneous or heterogeneous.

Homogeneous catalysts

The catalyst and the reagents are in the same phase (usually in solution). Homogeneous catalysts can be acids, bases or Lewis acids such as AlCl3, TiCl4 or BF3. Metals such as Pd, Pt, Fe, Ni, Co or Cu, with partially occupied d orbitals, are also in widespread use. Homogeneous catalysts have several advantages, such as a more homogeneous distribution within the solution or the possibility of using a more specific catalyst.

One of the main problems is the separation of the catalyst from the reaction products, with the consequent waste generation.

Heterogeneous catalysts

In this case, the catalyst and the reagents are in different phases (for example, a solid catalyst with liquid reagents).

Heterogeneous catalysis presents some advantages, such as an easy separation of the products from the catalyst, or the thermal and chemical stability of the materials used. Zeolites are an important class of materials used in several catalytic processes.

The reaction conditions tend to be more severe than with homogeneous catalysts.

Ultrasound and microwaves

The use of ultrasound and microwaves in synthesis procedures can lead to improved methodologies that consume less time and energy, making them more suitable for the industrial scale.

Ultrasound

Ultrasound consists of acoustic waves with frequencies from about 20 kHz up to around 10 MHz. The use of high-frequency, high-intensity waves can offer several advantages in the synthesis of new products, such as faster conversions, mild reaction conditions or a lower number of steps in a procedure. When ultrasound is applied to a liquid, it leads to the formation of microbubbles that collapse, giving rise to local temperatures of the order of 4,000-6,000 K and pressures of 1,000-2,000 bar. Ultrasound has been used extensively in analytical chemistry and can be considered a clean source of energy, with no toxicity and cheap equipment.

Microwaves

Microwaves are electromagnetic waves with wavelengths between 1 mm and 1 m, corresponding to frequencies between 300 MHz and 300 GHz. Microwaves can be used to introduce thermal energy into chemical reactions, with several advantages such as shorter heating times, the possibility of maintaining different temperatures within the mixture, and selective heating.
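A quick sanity check of that wavelength-frequency correspondence using c = λf (2.45 GHz is the band commonly used by domestic and laboratory microwave equipment):

```python
# Wavelength/frequency correspondence for electromagnetic waves: c = lambda * f
C = 2.998e8  # speed of light in vacuum, m/s

for wavelength_m in (1.0, 0.122, 1e-3):  # 1 m, 12.2 cm, 1 mm
    f_hz = C / wavelength_m
    print(f"lambda = {wavelength_m:7.3f} m -> f = {f_hz / 1e9:7.2f} GHz")
# 1 m -> 0.3 GHz and 1 mm -> 300 GHz bracket the microwave region quoted
# above; 12.2 cm corresponds to the common 2.45 GHz heating band.
```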

Electrochemistry and Photochemistry

Electrochemistry and photochemistry are set to occupy an important place in the development of greener processes, giving rise to new products and methodologies with less waste and lower energy use than the usual procedures.

Electrochemistry

When a chemical process causes electrons to move, it gives rise to oxidation-reduction (redox) reactions. If we carry out this process by placing different electrodes, connected through a saline medium, in a solvent, we are talking about an electrochemical reactor.
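To make the reactor picture concrete, Faraday's law of electrolysis relates the charge passed through the electrodes to the amount of material transformed. The copper deposition example below is my own illustration and assumes 100 % current efficiency:

```python
# Faraday's law of electrolysis: m = (Q / (z * F)) * M
F = 96485.0  # Faraday constant, C/mol


def electrolysis_mass(current_a, time_s, z, molar_mass):
    """Mass (g) converted by a current over a time, for a z-electron process."""
    charge = current_a * time_s  # total charge passed, C
    moles = charge / (z * F)     # moles of product formed
    return moles * molar_mass


# Cu2+ + 2e- -> Cu, at 1.0 A for one hour (100 % current efficiency assumed):
print(f"{electrolysis_mass(1.0, 3600.0, z=2, molar_mass=63.55):.2f} g of Cu")
# ~1.19 g; the electrons themselves do the reduction, with no added reducer.
```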

Electrochemical processes offer several advantages, such as mild and easily controlled conditions, low process risk and high atom efficiency. Since the electrodes are usually solids immersed in a solution, they can be considered heterogeneous catalysts. They make it possible to avoid the addition of oxidants or reducers that could become sources of contamination.

The main drawbacks can be the high cost of electrochemical reactors at the laboratory scale or the high energy consumption in industrial applications.

Photochemistry

Taking advantage of light, whether ultraviolet, visible or near-infrared, to produce new chemical products is a growing field of research. With the judicious use of light and under certain conditions, it is possible to obtain new results in chemical synthesis, as well as to produce or induce interesting properties in some materials. Photovoltaics, the conversion of sunlight into electricity, can be improved with the use of nanostructures, offering new options for solar cell devices.
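A short sketch of why light can drive chemistry: the energy of one mole of photons, E = NAhc/λ, is comparable to typical bond energies (the chosen wavelengths are arbitrary examples):

```python
# Energy carried by one mole of photons, E = N_A * h * c / lambda.
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
N_A = 6.022e23  # Avogadro constant, mol^-1


def molar_photon_energy(wavelength_m):
    """Energy of one mole of photons, in kJ/mol."""
    return N_A * H * C / wavelength_m / 1000.0


for nm in (300, 400, 700):  # UV, violet and red light
    print(f"{nm} nm -> {molar_photon_energy(nm * 1e-9):.0f} kJ/mol")
# ~399, ~299 and ~171 kJ/mol: the UV end is comparable to C-C or C-H bond
# energies (roughly 350-410 kJ/mol), which is why light can drive specific,
# electronically selective reactions.
```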

The main advantages of photochemical processes are the possibility of reducing the number of reagents, the use of mild temperatures, and their selectivity. Some molecules or materials interact with light at the electronic level, and this can lead to specific reactions.

Photochemical processes still present several drawbacks, among them the low absorption of the radiation or the higher cost of obtaining a product photochemically compared with other methodologies.

Analytical methodologies

The development of new analytical methodologies is an essential field of research. The fast and reliable identification and quantification of analytes is useful in several areas. The quick identification of toxic analytes, or of precursors of toxic analytes, enables prevention or the implementation of the actions necessary to minimise the adverse effects derived from their presence. For this reason, it is essential to develop new, fast strategies that avoid the use of expensive equipment. Colourimetric methodologies that respond to a particular analyte with colour changes, or related fluorescence processes (enhancement or quenching), can be fundamental for in-situ analysis.
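As a minimal example of how a colourimetric response becomes a number, the Beer-Lambert law, A = εlc, converts a measured absorbance into a concentration; the molar absorptivity below is an assumed value for a hypothetical sensor:

```python
# Beer-Lambert law, the workhorse behind colourimetric quantification:
#   A = epsilon * l * c  ->  c = A / (epsilon * l)


def concentration_from_absorbance(absorbance, epsilon, path_cm=1.0):
    """Analyte concentration (mol/L) from a measured absorbance."""
    return absorbance / (epsilon * path_cm)


# Hypothetical sensor with an assumed epsilon = 25,000 L mol^-1 cm^-1,
# measured in a standard 1 cm cuvette:
for a in (0.10, 0.50, 1.00):
    c = concentration_from_absorbance(a, epsilon=2.5e4)
    print(f"A = {a:.2f} -> c = {c * 1e6:.1f} uM")
# A simple spectrophotometer reading thus turns a colour change into a
# concentration, without any expensive instrumentation.
```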

Toxicity

As we saw in the previous sections, it is vital to take toxicity and toxicological effects into consideration, not only in synthesis and manufacturing but also in applications.

Metals like mercury were used in the past in several fields: as antibacterial agents, to control plant diseases in agriculture, or in the manufacturing of hats. Ingestion of mercury or exposure to its vapours can lead to memory loss or hyperactivity, among other effects. Organic mercury compounds, like methylmercury, can even cause death. In this sense, toxicity studies are essential to avoid the use of potentially toxic materials.

Likewise, it is well known that the toxicity of some substances depends on the dose: substances that are innocuous, beneficial or even necessary for life at low doses may be harmful at high doses. A notable case can be seen in chemotherapy compounds: drugs used against cancer can have a harmful effect on the healthy parts of the body. Applications or combinations of new treatments that eliminate or reduce the use of these drugs without compromising their beneficial effect would be preferable.

Let’s start from the bottom

Nanotechnology has always fascinated me as a field that holds great promise and potential for the future. The origins of this exciting field can be traced back to Michael Faraday. It is true that there are several earlier examples of the use of nanomaterials, such as the Lycurgus Cup (https://en.wikipedia.org/wiki/Lycurgus_Cup), but it was not until the famous Bakerian Lecture of 1857 (https://doi.org/10.1098/rstl.1857.0011) that Faraday began exploring the relationship between the size of gold and other metal particles and their optical properties, probably giving birth to what we today call nanotechnology. Several scientists came after Faraday, such as Richard Zsigmondy (Nobel Prize in Chemistry, 1925) and Jean Baptiste Perrin (Nobel Prize in Physics, 1926), who helped colloidal science advance and be understood. We cannot dismiss the contribution of Turkevich, Stevenson and Hillier in 1951, with their famous gold nanoparticle synthesis (https://pubs.rsc.org/en/content/articlelanding/1951/DF/DF9511100055), which is still widely used today. Almost a hundred years after Faraday, Richard Feynman’s seminal lecture “There’s Plenty of Room at the Bottom” (delivered in 1959, https://web.pa.msu.edu/people/yang/RFeynman_plentySpace.pdf) proposed the possibility of manipulating individual atoms and molecules, which he believed would lead to a new era of technological innovation. Feynman’s groundbreaking ideas paved the way for the development of nanotechnology and inspired a new generation of scientists to explore the possibilities of working at the nanoscale.

It was in the 1980s that K. Eric Drexler popularised the term “nanotechnology” and introduced the concept of molecular manufacturing in his book “Engines of Creation”. The book led to a surge of interest in the field and to the exploration of new ways to manipulate matter at the nanoscale.

Over the years, advances in imaging and microscopy technology have allowed scientists to study and manipulate matter at the nanoscale with ever greater precision, leading to the discovery of new materials and applications. Today, nanotechnology has found applications in many areas, including medicine, energy and electronics.

The history of nanotechnology is a fascinating story of human and scientific progress. As we continue exploring this field, we must remain responsible, maximising its benefits while minimising potential risks.