Reflective Essay: The Importance of Experimentation and an Open Mind

History can never be truly appreciated in the present moment. It is not uncommon for “history-making ideas” to never become influential enough to gain a real footing and therefore never make an impact. When occurrences do not make a visible or memorable impact they are forgotten, never to be seen by the future. Mistakes are buried. Successes are glorified. In this way, the history that makes it to the present day has not only beaten the odds but has also been selected to be remembered. Written history is never the complete story; only what was deemed important or influential enough to be worth saving is saved. However, in the present day we often look for the complete picture. What really happened? Why were some cultures more powerful than others? These questions have led historians to look for the buried truth. They do not simply look at the path that the river has carved; they search for the water’s attempts at something new. This treasured history can be interpreted in numerous different ways depending simply on the scholar who wrote it. In a similar manner, scientific results have often been contested. I believe that the key difference between the two domains lies in the fact that science can continually be explored – there is no delay between the event and the interpretation. Experiments can always be refined, laws changed, opinions altered, and progress unified. I believe that my biggest takeaway from the course is that there is always more to be learned and there will always be a hidden history that is not widely known. Chemistry was not always its own science, and there were many twists along the path to recognition, several of which are not well known. To give a glimpse into my understanding I will use examples from various time periods throughout recorded history.

The hallmark of early thought and advancement is usually accepted to be the bastion that was Ancient Greece. Individuals like Socrates, Plato, and Aristotle are placed upon a pedestal for their series of logical conclusions regarding the physical and theoretical world. These philosophers dabbled in all manner of thought, from the act of being to the breakdown of physical phenomena. Depth of thought, coupled with relatively formal education, divided the Greeks into classes. The lowest Greek class was the slave class. An additional class of workers, semi-analogous to a middle class, also existed in Greek society. Slaves and the working class were too caught up in their daily lives and too uneducated to contribute to Greek thought. This left the responsibility of scientific progression to the philosophers. The “thinking class” would not necessarily have been a detrimental idea, had it not been for the Greek stigma against working with one’s hands. The Greeks believed that the privileged were above working with their hands, which had a catastrophic effect on scientific progression. This stigma created a culture of speculation rather than a culture capable of experimentation, leaving thought to remain theory rather than solidify into progress (1). Untested theory was allowed to survive attacks because logically the ideas made sense. Unfortunately for the progression of science as a whole, the Greek stigma against manual labor never subsided, and it was not until after the end of the golden age of Greece that any concrete strides were made. I never knew about the Greeks’ lack of desire to experiment; I had always assumed that the Greeks were so intelligent because they had experiments to support their logical theories.

The Middle Ages were an interesting time for science. The fall of Rome led to the creation of feudal states in Europe. Each individual state was not concerned with the advancement of science – only survival and the accumulation of wealth. This situation gave rise to alchemy, which mostly concerned itself with the transmutation of common metals into gold. The focus on the transmutation theory lasted into the 16th century, at which point a progressive change toward experimentation was observed. Probierbüchlein, a German book that focused primarily on mining, minerals, and assaying, was responsible for a dramatic shift in experimental procedures for the sciences (2). Under alchemical thought processes there was no unified method of determining the weight of reagents, which left individual alchemists to devise their own “specific” measurements. Probierbüchlein, although not specifically meant for the sciences, stressed the necessity for quantitative measurements and the use of a balance. One of the first recorded instances of a published experiment involving quantitative measurements throughout was written by Giovanbattista della Porta, describing his methods of distilling the oils required for perfume (2). This was not the only contribution to science that came out of the end of the Middle Ages.

Paracelsus, a physician, developed the idea of iatrochemistry, or the use of alchemy to prepare medicines. This in part stemmed from Paracelsus’ “anti-Galen” view of medicine, which rejected the idea of the four humors and adopted a more modern view involving the concept of sickness coming from outside of the human body. Fundamentally, iatrochemistry was an enormous step away from classical alchemy with the art of transmutation at its heart. Paracelsus was experimentally able to create salts from metals. These “new” substances led to an enormous increase in the number of remedies available for doctors to prescribe (2). This ultimately led to a furor to discover new chemical substances and thereby a massive increase in well-documented explorative experimentation. Had scientific progress been limited to the method of progression of the 16th century, we would have a scientific understanding strictly grounded in observable results. The 16th century is often regarded as “one in which the technological branch of science progressed while the theoretical side remained relatively inactive” (2). Without some background on this statement, this would appear to be a tragic setback; however, the theoretical knowledge that survived into the 16th century was based on old alchemical theories. The idea of having one “branch” of chemistry progress so much further than another was astounding to me. Had I been asked to predict which would happen, I would have guessed that theory would progress past experiment, with experiments being used to solidify or disprove theories. This has given me a slightly different lens through which to view recorded history.

The Renaissance brought about a renewal in human creativity and ingenuity. This is the time period in which chemistry truly became a differentiated branch of science (3). The science began to emerge from the experiments of pharmacists and the theory of physicists. Scientists like Johann Rudolph Glauber popularized chemistry; his books Furni Novi Philosophici and Pharmacopoeia Spagyrica were responsible for the simplification of chemical terms and gave lists of chemical recipes, respectively (3). Jean Béguin gave public lectures on chemistry and published a book called Tyrocinium Chymicum, or “The Chemical Beginner” (3). Spurred by the invention of the printing press, chemistry was now available and relatively understandable to a much larger portion of the populace. This phenomenon brought about the emergence of the “scientific amateur,” allowing for a much greater number of chemistry experiments to be conducted. The influential triumvirate of Joseph Priestley, Henry Cavendish, and Antoine Lavoisier were incredibly gifted individuals who studied the phenomena of chemistry before the science really solidified. Because chemistry was not a fully developed or universally accepted branch of science, no degrees were being given in it, which means Priestley, Cavendish, and Lavoisier were technically amateurs in the field (4). I find it fascinating that something as basic as simplifying scientific language caused a revolution in chemistry. Without the contributions of men like Glauber and Béguin it is possible that chemistry would have remained a mystical art that only very few were able to learn.

The expansion of chemistry could not have been achieved solely with the steps of Glauber and Béguin; the science needed a charismatic poster boy. I believe this charismatic figure was Antoine Lavoisier. Lavoisier was born into wealth and power, although he rarely concerned himself with political issues, preferring to spend his time working in his laboratory. Lavoisier was the first person to accurately describe the act of burning, which led to the disproving of the phlogiston theory (4). He also wrote a book called Traité Élémentaire, the first chemistry textbook written in the “new” language of chemistry (4). Although he was well liked, well known, and removed from the political arena, Lavoisier had the misfortune of being a French aristocrat during the time of the French Revolution. Although he was executed during the Reign of Terror, his legacy endured; he is still seen in a favorable light, and his accomplishments cannot be overstated.

The road to modern chemistry was not always a smooth one. Unwillingness to abandon old theories in order to adopt new ones plagued progress. I believe that this is best demonstrated in the work of Svante Arrhenius. He spent many years studying the flow of electric current through dilute salt solutions, but had he not censored his findings he would not have passed his doctoral exam. Arrhenius was eventually taken in by Wilhelm Ostwald, and with the help of van’t Hoff they were able to find irrefutable proof of Arrhenius’ theory (5). Through unyielding dedication and perseverance Arrhenius was able to prove to the world that what he proposed was correct. Before taking this class I had generally believed that major changes to scientific laws were nearly unanimously accepted. The plight of Arrhenius was proof enough for me that there will always be fierce resistance to change. The role of the scientist is to continue working and experimenting until the evidence exists to change the opinions of the critics.

The shift in the perspective on experimentation that occurred from the golden age of Greece to the 19th century is what I really enjoyed learning about. History tends to ignore the opposition and praise the majority, giving only one face of the coin. Throughout the semester I have had my opinions changed based on a relatively small amount of revealing reading. In-class lectures, often drawing on dozens of outside resources, brought in several unconventional viewpoints on chemical events and made me openly question whether the same idea could be applied to any number of other events. I had always assumed that the Greeks had supported their theories with experimental data or something observable; I was completely unaware that experimentation was considered to be a slave’s work and not fit for the thinkers. Similarly, I was unaware of how long it truly took for the ideas of alchemy to die out. These ideas may have acted as a springboard into modern chemistry, but I had always assumed chemistry had been accepted and that the old ideas had faded very shortly after. Along the same lines, the history I had read regarding chemical law rarely mentions a major opposition to a new idea. The attitude of modern chemistry, in which all possible results should be explored and replicated to prove or disprove a theory, simply was not present during the fledgling period of chemistry. Reading through various accounts of a time period was essential to my change in opinion. I have a new understanding that what is generalized by history can often be analyzed to unearth an entirely different story, and it is not until the entire story is known that the events can truly be interpreted. While the Greek philosophers were incredible thinkers, they made no concrete progress towards the discovery of the phenomena of the world.
An act as simple as deciding on a universal language of chemistry not only immediately increased the number of individuals who could experiment with chemistry, but set a precedent that sent ripples through history. The significance of an event is hard to gauge in the present, but the potential impact of scientific results cannot be ignored.

References

All references used were the artifacts I created throughout the semester.

(1). Question #1

(2). Question #3

(3). Question #4

(4). Question #2

(5). Question #5

Not directly cited, but used for general knowledge and understanding:

H.M. Leicester. The Historical Background of Chemistry, Dover Publications, Inc., New York, USA, (1971)

B. Jaffe. Crucibles: The Story of Chemistry, Dover Publications, Inc., New York, USA, (1976)

In class lectures

Biochemistry and Physiological Study Milestones

1804, Nicolas Theodore de Saussure (1767-1845), showed via quantitative measurements that the carbon in plants came almost entirely from carbon dioxide and that the rest of a plant (aside from minerals) was made up of water (1).

1816, Francois Magendie (1783-1855), tested the effects of a monotonous diet of water and a single non-nitrogenous food on dogs.

1817, J. Pelletier (1788-1842) and J. B. Caventou (1795-1877), isolated chlorophyll.

1824, William Prout (1785-1850), showed that the acid of gastric juice was muriatic (hydrochloric) acid (2).

1827, William Prout (1785-1850), developed the idea of three classes of foodstuffs – the saccharine, the oily, and the albuminous.

1835, Theodore Schwann (1810-1882), determined that gastric juice contained a catalyst, which he called pepsin, to break down food.

1842, Justus von Liebig (1803-1873), applied his theories of chemistry to animal and human physiology (3).

1845, J. R. Mayer (unknown), proposed that plants fixed the energy of sunlight, which later served to supply energy to humans.

1845, Louis Mialhe (1807-1886), obtained the enzyme ptyalin from saliva.

1846, Claude Bernard (1813-1878), observed the breakdown of starch, fats, and protein via pancreatic juice.

1849, A. A. Berthold (1803-1861), provided the first experimental proof of endocrine function by transplanting testicular tissue in fowls, showing that he could thus prevent the effects of caponization.

1852, Friedrich Bidder (1810-1894) and Carl Schmidt (1822-1894), confirmed Prout’s announcement of hydrochloric acid in the stomach. They also clearly showed the effect of food on the respiratory quotient of animals.

1857, Claude Bernard (1813-1878), isolated glycogen from the liver.

1865, Carl Voit (1831-1908), showed that combination with oxygen was not the first step in energy production, but that a large number of intermediaries were formed from the original food before the final union with oxygen occurred.

1866 – 1873, Carl Voit (1831-1908) and Max Pettenkofer (1818-1901), published papers on animal metabolism under different conditions.

1876, Willy Kühne (1837-1900), completed the work of Bernard by studying the action of pancreatic juice on proteins and isolating trypsin. Also introduced the term enzyme.

1877, E. F. W. Pflüger (1829-1910), proved Voit’s theory on energy metabolism by showing that a rabbit consumed the same amount of oxygen whether breathing quietly or under forced respiration.

1881, Nikolai Ivanovich Lunin (1854-1937), showed that a small amount of milk must be added to purified diets of carbohydrates, fats, and proteins in order to keep experimental animals alive.

1883 – 1884, Max Rubner (1854-1932), announced the isodynamic law, which stated that the three types of foods – carbohydrates, fats, and proteins – were equal in calorific value. Later modified by Rubner with the specific dynamic action of foods.

1889, C. E. Brown-Sequard (1817-1894), injected testicular extracts into various subjects (including himself), introducing the idea of a chemical mechanism for the control of important processes.

1889, Joseph von Mering (1849-1908) and Oscar Minkowski (1858-1931), showed that the removal of a dog’s pancreas caused a sharp rise in blood sugar.

1890, Emil Fischer (unknown), his studies of the structures of purines and polypeptides opened the way for an understanding of nitrogen metabolism.

1895, George Oliver (1841-1915) and Edward Albert Sharpey Schafer (1850-1935), obtained extract of the adrenal gland that had a powerful effect on raising blood pressure.

1897, Eduard Buchner (1860-1917), obtained an extract of yeast that showed fermenting power – ending the debate between “unorganized ferments” and enzymes by realizing that both are enzymes.

1901, Gerrit Grijns (1865-1915), first correctly explained beriberi as a deficiency disease.

1901, Jokichi Takamine (1854-1922), isolated adrenaline (also known as epinephrine) – the first isolation of a hormone.

1902, William Maddock Bayliss (1860-1924) and Ernest Henry Starling (1866-1927), discovered secretin (the hormone that stimulates the flow of pancreatic juice).

1905, William Maddock Bayliss (1860-1924), suggested the name hormone.

1907, Axel Holst (1861-1931) and Theodore Frolich (1871-1953), experimentally extended the concept of deficiency diseases to guinea pigs.

1912, Casimir Funk (1884-1967), suggested that beriberi, scurvy, and pellagra were diseases that required the presence of organic nitrogenous bases in the diet for prevention (4).

1914, Edward Calvin Kendall (1886-1972), isolated the hormone thyroxine (5).

1915, E. V. McCollum (1879-1967), showed that rats required at least two substances in the diet – “fat-soluble A” and “water-soluble B”. These were later called vitamins A and B.


References

(1). H.M. Leicester. The Historical Background of Chemistry, Dover Publications, Inc., New York, USA, (1971), pp. 220-242.

(2). http://en.wikipedia.org/wiki/William_Prout, accessed 6 April 2015.

(3). http://en.wikipedia.org/wiki/Justus_von_Liebig, accessed 6 April 2015.

(4). http://en.wikipedia.org/wiki/Casimir_Funk, accessed 6 April 2015.

(5). http://en.wikipedia.org/wiki/Edward_Calvin_Kendall, accessed 6 April 2015.

The Cyclotron

The cyclotron is a particle accelerator that uses an electromagnet and a vacuum chamber to accelerate charged particles, applying a modest initial voltage many times over before shooting the particles out. It was invented by Ernest Lawrence in 1932, and he was awarded the Nobel Prize in Physics for this invention in 1939 (1). Lawrence was said to have initially shot “every available projectile at every available target” in the hope of breaking into and shattering the nuclei of every atom (2). Commonly used projectiles included protons, helium nuclei, and deuterons (the nuclei of deuterium atoms). In 1935 Lawrence shot deuterons at the element lithium and obtained helium. Jean and Irène Joliot appear to have discovered the phenomenon of artificial radioactivity by shooting protons, accelerated through 600,000 volts, at lithium fluoride crystals; they saw visual flashes of helium atoms striking a special zinc screen. Similarly, Lawrence’s lab discovered 120 artificially produced radioactive substances. One of the more interesting of these was radioactive sodium, with a half-life of 15 hours. Radioactive sodium produces gamma rays with energies close to 5.5 million electron volts, which is almost three times the penetrating power of the gamma rays given off by decaying radium. Radioactive sodium was used in cancer therapy as a substitute for x-rays and radium, as well as being used as a radioactive tracer for medical research (2).
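The idea of amplifying a small voltage many times over can be illustrated with the standard cyclotron resonance formula, f = qB/(2πm). The numbers below (a 1.5 T field and 50 kV per gap crossing) are illustrative assumptions for a rough sketch, not figures describing Lawrence's actual machines:

```python
import math

# Physical constants for a proton
Q_PROTON = 1.602176634e-19   # elementary charge, C
M_PROTON = 1.67262192e-27    # proton mass, kg

def cyclotron_frequency(charge, mass, b_field):
    """Revolution frequency (Hz) of a charged particle in a uniform
    magnetic field: f = qB / (2*pi*m). The alternating voltage across
    the "dees" must match this frequency so the particle is pushed on
    every half-turn."""
    return charge * b_field / (2 * math.pi * mass)

def energy_ev(voltage_per_gap, crossings):
    """Energy (eV) gained by a singly charged particle after a number
    of accelerating gap crossings."""
    return voltage_per_gap * crossings

# Assumed 1.5 T field: a proton circles at roughly 23 MHz
f = cyclotron_frequency(Q_PROTON, M_PROTON, 1.5)
print(f"Orbital frequency: {f / 1e6:.1f} MHz")

# An assumed 50 kV per crossing reaches millions of electron volts
# after only 100 crossings - the "amplification" of a modest voltage
e = energy_ev(50_000, 100)
print(f"Energy after 100 crossings: {e / 1e6:.1f} MeV")
```

This also shows why the cyclotron works at all: the revolution frequency does not depend on the particle's speed (at non-relativistic energies), so a fixed-frequency voltage stays in step as the particle spirals outward.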


References

(1). http://en.wikipedia.org/wiki/Cyclotron , accessed 4 April, 2015.

(2). B. Jaffe. Crucibles: The Story of Chemistry, Dover Publications, Inc., New York, USA, (1976) pp. 265-282.

A Glimpse into Russian Chemistry Contributions

During the differentiation of the branches of chemistry, namely organic and inorganic chemistry, several Russian chemists made important discoveries that progressed the science as a whole. One of these chemists is Alexander Butlerov, who in 1864 prepared the first tertiary alcohol – tert-butyl alcohol. More significant than Butlerov were the contributions of V.V. Markovnikov. Markovnikov was ultimately able to group his discoveries and theories together into what is now famously, or infamously depending on your feelings towards organic chemistry, known as Markovnikov’s rule. This rule allows the prediction of the major and minor products of an addition reaction (1). Possibly an even better-known Russian chemist is Dmitri Mendeleev. Mendeleev is famous for pioneering a reasonable system of classifying the elements into what is now known as the periodic table. Mendeleev ordered his elements by increasing atomic weight, which led him to recognize certain patterns among the known elements. What separated Mendeleev from his contemporary Lothar Meyer, who was also working on organizing the elements, was that Mendeleev focused more on the chemical properties of the elements rather than their physical properties. Both Mendeleev and Meyer left open spaces in their tables for elements that they believed had not yet been discovered; however, Mendeleev went so far as to speculate on the properties of the elements that would fill these open spaces. He was overwhelmingly successful. His predictions for what are now gallium, scandium, and germanium were incredibly close to the actual properties that were measured once the elements were finally discovered (2). Arguably there was no focus on the entirety of Russian chemistry at any given time, but Russia produced several key players who had an enormous impact on the progress of chemistry.


References

(1). http://en.wikipedia.org/wiki/Markovnikov%27s_rule, accessed 11 March 2015.

(2). H.M. Leicester. The Historical Background of Chemistry, Dover Publications, Inc., New York, USA, (1971) pp. 172-198.

Urea

Urea, also known as carbamide, is an organic nitrogen-containing compound with the chemical formula CO(NH2)2. It is often considered to be the chemical that gave birth to organic chemistry (1). Until the early 19th century most people believed in the theory of vitalism, the belief that life was not subject to the laws of physics or chemistry but instead contained some divine principle they called the “life spark”. This led to the belief that the chemicals found in living things, such as proteins and carbohydrates, were unlike any non-living chemicals (1). With a relatively primitive understanding of chemistry at the time, scientists did not believe that “organic” compounds could be synthesized, holding instead that they must be obtained naturally from a living source. The counter-evidence to the vitalism theory came with the discovery and eventual synthesis of urea. Urea was discovered in 1727 by the Dutch physician Herman Boerhaave (2). Boerhaave isolated urea by purifying urine samples from several animals (1). Unfortunately Boerhaave saw himself as a physician – and rightfully so, as he is now considered the founder of clinical teaching – rather than a chemist, and he was therefore reluctant to publish his findings. He was even quoted as saying “Nothing was formerly further from my thoughts than that I should trouble the world with anything in chemistry” (3). Eventually Boerhaave did publish his findings, but only after his students published them on his behalf. However, because Boerhaave was not concerned with chemistry, these findings were seen by very few and were ultimately forgotten. Some people credit the discovery of urea to the French chemist Hilaire Rouelle in 1773, 50 years after Boerhaave (2). Gradually Boerhaave is being given the credit he has earned for his discovery. Urea was actually named in 1797 by the French chemists Fourcroy and Vauquelin.

In mammals urea is naturally synthesized in the liver as part of the urea cycle, either from the oxidation of amino acids or from ammonia. Amino acids are brought into the body through the breakdown of foods. Amino acids are mostly used during the synthesis of peptides and proteins, but any excess can be metabolized to produce a small amount of energy (3). Ammonia is a byproduct of the metabolism of nitrogenous compounds, and the buildup of ammonia can raise the pH of cells to toxic levels (2). Because of this potential toxicity, the human body spends energy to convert ammonia to urea, which is practically harmless and can be removed through urine or sweat. Although urea itself is colorless and odorless, it readily decomposes in water back to ammonia, which is why urine has a characteristic smell. The continued decomposition of the urea in urine is the reason why stale urine is so much more pungent than fresh urine (3). Urea can also be produced industrially through a variety of chemical processes.

Urea was first synthesized by the German chemist Friedrich Wöhler in 1828. Wöhler was able to obtain urea by treating silver cyanate with ammonium chloride:

AgNCO + NH4Cl → (NH2)2CO + AgCl

The creation of urea, an organic compound, from inorganic reactants severely discredited the theory of vitalism. For this discovery, Wöhler is considered by many to be the father of organic chemistry (2). Ultimately this process was not practical on an industrial scale. There are several modern methods of producing urea. The most common is the Bosch-Meiser urea process (2). This process has two major steps: first, the fast exothermic reaction of liquid ammonia with gaseous carbon dioxide at high temperature and pressure to form ammonium carbamate:

2NH3 + CO2 ⇌ H2N-COONH4

The second step in the process is the endothermic decomposition of ammonium carbamate into urea and water:

H2N-COONH4 ⇌ (NH2)2CO + H2O

Modern technology allows for the almost total recycling of unused carbon dioxide and ammonia. Overall the Bosch-Meiser process is exothermic; the process supplies a majority of its own heat, as the heat given off by the first reaction can be used to drive the second (2). Strangely enough, the conditions for one step of this process are very unfavorable for the other, so the process is run at a compromise between the two. The high temperature (190ºC) needed for the second step is compensated for by conducting the process at a high pressure (140-175 bar), which favors the first reaction (2). Due to the length of time required for the conversion of carbamate into urea to reach equilibrium, synthesis reactors tend to be enormous pressure vessels. The slow urea conversion reaction has the potential for two side reactions that produce impurities. Biuret is formed when two molecules of urea combine with the loss of a molecule of ammonia (2):

2NH2CONH2 → H2NCONHCONH2 + NH3

This side reaction is generally avoided by maintaining an excess of ammonia in the synthesis reactor. Biuret is undesirable in urea fertilizer because it is toxic to some plants. However, it is actually preferred in cattle feed. The second impurity, called isocyanic acid, can result from the decomposition of ammonium cyanate, which is in equilibrium with urea.

NH2CONH2 → NH4NCO → HNCO + NH3

This reaction occurs when the urea solution is heated at low pressure, such as when the solution is being concentrated (2). The reaction products volatilize into vapors and then recombine into urea when condensed, ultimately contaminating the process condensate. Urea can also be made on a relatively small scale in laboratories through the reaction of phosgene with ammonia (2).

COCl2 + 4 NH3 → (NH2)2CO + 2 NH4Cl

Urea has a chemical formula of CO(NH2)2, a molecular weight of 60.05526 g/mol, and a density of 1.323 g/cm3. It is usually found as white crystals or powder. Urea may gradually develop an odor if any moisture reacts with it, as it will gradually decompose back into ammonia. It has a melting point of 132.7ºC and no boiling point, because it decomposes before it boils. Urea is insoluble in benzene, soluble in concentrated HCl and pyridine, and soluble up to 545 g/L in water at 25ºC. A 10% urea solution has a pH of 7.2. Urea has a vapor pressure of 1.2 × 10^-5 mmHg at 25ºC (4).
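The molecular weight above, along with urea's standing as the most nitrogen-rich solid fertilizer and the feedstock demands of the Bosch-Meiser reactions, can all be checked with a short stoichiometry sketch (the atomic weights below are standard values, and the per-ton figures ignore recycle losses):

```python
# Standard atomic weights, g/mol
ATOMIC = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999}

def molar_mass(formula_counts):
    """Molar mass (g/mol) from a dict of element -> atom count."""
    return sum(ATOMIC[el] * n for el, n in formula_counts.items())

UREA = {"C": 1, "O": 1, "N": 2, "H": 4}   # CO(NH2)2
NH3 = {"N": 1, "H": 3}
CO2 = {"C": 1, "O": 2}

m_urea = molar_mass(UREA)
print(f"Urea molar mass: {m_urea:.2f} g/mol")   # ~60.06, matching the text

# Nitrogen mass fraction - why urea is the richest solid nitrogen fertilizer
n_frac = 2 * ATOMIC["N"] / m_urea
print(f"Nitrogen content: {n_frac:.1%}")        # ~46.6%

# Overall Bosch-Meiser stoichiometry: 2 NH3 + CO2 -> CO(NH2)2 + H2O
# Idealized feedstock per metric ton of urea
t_nh3 = 2 * molar_mass(NH3) / m_urea
t_co2 = molar_mass(CO2) / m_urea
print(f"Per ton of urea: {t_nh3:.3f} t NH3 and {t_co2:.3f} t CO2")
```

The nitrogen fraction of roughly 47% by mass is what makes urea so attractive as a fertilizer compared to, for example, ammonium nitrate at about 35%.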

In 2012 approximately 184 million tons of urea were produced through industrial processes. More than 90% of this was used in the production of nitrogen-release fertilizers (2). Urea has the highest nitrogen content of all solid nitrogen fertilizers in use. Many of the bacteria in soil have the enzyme urease, which catalyzes the decomposition of urea into ammonia, or ammonium and bicarbonate ions. Ammonium, coupled with nitrate, is the major source of nitrogen for plant growth. This allows for much better crop yields from an area than nature alone would allow. Urea can also be used as a small supplement to cattle feed. Urea is also used in the chemical industry as a raw material for urea-formaldehyde resins, which are often used in the production of plywood. Urea can also be used to make urea nitrate, a high explosive that is used industrially but is also a common ingredient in some improvised explosive devices. It can also be used in selective catalytic reduction (SCR) and non-selective catalytic reduction (NSCR) reactions in automobiles to reduce the various nitrogen oxide pollutants in the exhaust from diesel engines (2). In chemical and medical laboratories urea can be used as an agent to denature proteins, serve as a hydrogen source for fuel cells, and make fixed brain tissue transparent to visible light while still preserving fluorescent signals from specifically labeled cells (2). Urea-containing creams can be used for medical purposes like the rehydration of the skin and the debridement of nails. Strangely enough, urea can also be used as a diuretic that is safe and inexpensive. Urea has niche uses as an ingredient in dish soap, a flavor enhancer for cigarettes, an additive to dye baths, and a myriad of other applications (2).

The major strategic importance of urea is its use as a fertilizer. Plants naturally need ammonium to grow, and ammonium is not overwhelmingly prevalent in most soils. Nitrogen fertilizers – with urea-based fertilizers having the highest nitrogen content – introduce a method to increase the ammonium potential of the soil. This artificially produces relatively rich soil with an increased capability to grow crops. The increased levels of nitrogen in the soil from the fertilizers, together with the bacteria that transform it into ammonium, allow for better crop yields in a smaller area (5). This breakthrough in artificial nitrogen fixation has allowed the Earth to produce more food than would naturally be possible, thereby increasing its carrying capacity. Any country that grows even a moderate amount of food should consider urea to be important.

The current price for urea is $297 per metric ton. The price of urea has been on a very gradual decline since it reached its peak of $503.80 in September 2011, with the exception of a spike in the spring of 2012, when it reached $496.70 in May (6). This trend is predicted to continue as production costs decrease and technology continues to advance.

The MSDS fact sheet for urea states that it has a mutagenic effect on mammalian somatic cells and that prolonged exposure to it can produce organ damage. In case of eye contact, wash the eye for at least 15 minutes and seek medical attention. In case of skin contact, flush the skin with plenty of water, cover the irritated skin with an emollient, and seek medical attention. If inhaled, move to fresh air or give oxygen and seek medical attention. If urea is ingested it is recommended that you loosen any tight clothing and only seek medical attention if symptoms appear. Urea may be combustible at high temperatures. Urea should be stored away from heat or any source of ignition. When handling urea it is recommended to wear splash goggles, a lab coat, a dust respirator, and gloves (7).

Urea can be analyzed through a variety of methods. It has been shown that thin-layer chromatography (TLC) is a feasible means of determining the presence of urea in solution (4). Urea can also be analyzed via IR spectrophotometry and LC-MS. Urea is often used in conjunction with LC-MS in bottom-up proteomics as a denaturing agent for proteins (8).

Only having immediate access to the information, and thus the viewpoints, of today greatly influences the perceived importance of a substance. It is difficult to set aside a 21st-century bias when judging the ultimate significance of something relatively common; even so, urea can be considered a molecule that has affected cultures in a positive way. Simply put, urea, with its high nitrogen content and its ability to be made into fertilizer, has allowed for the growth of not only the populations of individual countries, but the world as a whole. The widely accepted large-scale “industrial” farm is reliant on nitrogen-based fertilizers to maintain its level of output. Without fertilizer, the natural nitrogen content of the soil would not be enough to produce the quantity or quality of crops that are now available. From this standpoint it can be seen that urea, through efficient fertilization, has affected culture by allowing the continuation of large-scale growth. As countries continue to develop and cities continue to expand, more and more food must be grown to support this concentration of population. In 2013 China produced 46 million tons of urea, almost one third of the total urea produced that year (9). Chinese dominance in urea production does not necessarily mean that the Chinese benefit from it the most. Arguably, any nation that produces a sizable amount of food with a good crop-yield-to-acreage ratio can be assumed to have benefited from urea. This may include the United States, China, India, Brazil, and Russia (10).

Ultimately, urea has changed society as a whole. Without the ability to introduce additional nitrogen into the soil artificially, the carrying capacity of the Earth would be much lower than it is presently. Although not incredibly rare or valuable from a monetary standpoint, urea has made an incredible impact on society that is often overlooked. It has increased crop yields by 30%–50% versus fields not treated with urea fertilizer (11). This has reduced the land required for food production and opened up more land for urban development.

References

(1). http://humantouchofchemistry.com/urea-and-the-beginnings-of-organic-chemistry.htm , accessed 8 Mar, 2015.

(2). http://en.wikipedia.org/wiki/Urea , accessed 8 Mar, 2015.

(3). http://www.rsc.org/chemistryworld/podcast/CIIEcompounds/transcripts/urea.asp , accessed 8 Mar, 2015.

(4). http://pubchem.ncbi.nlm.nih.gov/compound/urea#section=Solubility , accessed 8 Mar, 2015.

(5). http://www.education.com/science-fair/article/effect-physical-form-fertilizer-plant/ , accessed 8 Mar, 2015.

(6). http://www.indexmundi.com/commodities/?commodity=urea&months=60 , accessed 8 Mar, 2015.

(7). http://www.sciencelab.com/msds.php?msdsId=9927317 , accessed 8 Mar, 2015.

(8). http://en.wikipedia.org/wiki/Liquid_chromatography%E2%80%93mass_spectrometry#Proteomics.2Fmetabolomics , accessed 9 Mar, 2015.

(9). http://minerals.usgs.gov/minerals/pubs/commodity/nitrogen/mcs-2014-nitro.pdf , accessed 9 Mar, 2015.

(10). http://en.wikipedia.org/wiki/List_of_largest_producing_countries_of_agricultural_commodities , accessed 9 Mar, 2015.

(11). http://www.ipni.net/ppiweb/ppinews.nsf/$webcontents/7DE814BEC3A5A6EF85256BD80067B43C/$file/Crop+Yield.pdf , accessed 9 Mar, 2015.

Early Atomic Laws of Combination

The early understanding of the combination of atoms was shaky. Theories that would have advanced the field, or at least pointed discussion in the right direction, were ignored by key individuals, potentially setting back chemical understanding for decades. One of the first theories about how atoms combine was published by Proust in 1808. The “law of constant proportions” was the result of Proust’s experimentation with copper carbonate, in which he found that its composition was fixed no matter how it was prepared or whether it occurred naturally. Proust’s idea seemed to hold true for simple compounds, but it was contested at the time. Dalton assumed that a compound of two substances would contain one atom of each constituent. Eventually Dalton developed the idea of “variability of chemical composition,” in which most compounds were binary and formed in a one-to-one ratio, but some elements were allowed to form ternary compounds in which one element connected two others. Following Dalton, the work of Berzelius became popular. He ultimately cemented the idea of the “law of multiple proportions,” the theory that one atom might combine with a variable number of other atoms. Working on atomic theory around the same time as Berzelius was Avogadro. Avogadro proposed that equal volumes of different gases must contain the same number of particles. This idea allowed him to deduce that the ratio of the densities of any two gases is equal to the ratio between the masses of their particles, ultimately giving rise to the idea of “atomic weights.” Avogadro also theorized that elements could bond with themselves. This idea was too far-fetched for Berzelius and was disregarded, which, given the influence Berzelius had on the chemistry world, meant that it was ignored by all.
In 1815 and 1816 William Prout published two anonymous papers suggesting that the atomic weights of many of the known elements were whole multiples of the atomic weight of hydrogen, a claim unfortunately supported with inaccurate data. One of the last areas of discussion in the selection relates to the composition of acids. Lavoisier had assumed that all acids contained oxygen, even if only in small amounts. This was ultimately disproven by Humphry Davy, and a new understanding of acids was proposed by Liebig, who assumed that acids were compounds not of oxygen but of hydrogen (1). The road to understanding the combination of atoms was a knot of theories that were often supported with research results, but sometimes inaccurate results were published and taken as truth. This unintentional misleading, in conjunction with the complex nature of atomic interactions, was responsible for the delayed acceptance of a unified atomic theory.

References

(1). H.M. Leicester. The Historical Background of Chemistry, Dover Publications, Inc., New York, USA, (1971) pp. 150-171.

Obstacles to Recognition

Svante Arrhenius and Marie Curie, now considered pioneers of physical chemistry and radioactivity respectively, were not always regarded as the great chemists they are today. Arrhenius originally spent years researching the passage of electrical current through dilute salt solutions, and he was only reluctantly granted his doctorate from the University of Uppsala. He knew that he was in possession of a revolutionary theory, but he was forced to censor his findings for his doctoral board. Following the receipt of his doctorate, Arrhenius searched desperately for an established chemist who saw the potential in his work. After a number of notable chemists turned him away, Wilhelm Ostwald enthusiastically took Arrhenius under his wing. Along with van’t Hoff, the three musketeers spent years preparing to release irrefutable proof of their theories. The opposition was fierce, but ultimately the three were successful. Marie Curie faced a different, deeper-rooted obstacle: the male domination of all scientific fields. Initially, many scientific minds attributed Marie Curie’s success in isolating a polonium salt to the efforts of her husband Pierre. However, following his tragic death Marie was determined to continue their work. In 1910 Marie successfully isolated the metal radium and was never again dismissed as a poor scientist because she was a woman. The problems encountered by Arrhenius may very well be faced today. Oftentimes it is easier to hold on to proven theories and simply shun new ones than to look into the validity of a claim. It is probable, however, that there would be less resistance to such claims today, given the documentation required to publish an article and the relatively high probability that at least one other group of scientists would test the claim’s validity. The gender barrier faced by Marie Curie is not as large a concern anymore, although some bias will always persist.
The gender barrier is likely to have been replaced by today’s barriers of racial and religious prejudice (1).

References

(1). B. Jaffe. Crucibles: The Story of Chemistry, Dover Publications, Inc., New York, USA, (1976) pp. 164-196.

Sodium Carbonate

Sodium carbonate, more commonly known as soda ash, is one of the most widely used compounds in the United States. Its importance can be clearly seen in the Federal Reserve Board’s incorporation of monthly soda ash production into the economic indicators used to monitor the U.S. economy (1). This “cornerstone” salt has been available in relative abundance for millennia. The first use of soda ash has been credited to the ancient Egyptians, who obtained it from dry lake bed deposits or by burning seaweed and other aquatic plants. The extraction of soda ash from various plants continued until the middle of the 19th century. The name “soda ash” originates from these ancient methods of extraction: “soda” refers to the plants that grow in salt marshes, and “ash” simply refers to the burning of the plants.

Naturally, soda ash can be obtained by collecting, drying, and burning salt-tolerant plants known as halophytes. The ashes of these halophytes are then “lixiviated,” or washed with water, to create an alkali solution (2). The solution is then boiled dry to create the final product. This method, used until the mid-1800s, ultimately does not produce a very pure product; the concentration of soda ash varied depending on several factors, including plant species and geographic location. Sodium carbonate can also be found in the form of trona (trisodium hydrogendicarbonate dihydrate, Na3HCO3CO3·2H2O) deposits. In the United States these deposits are mined in California, Wyoming, and Utah. They occur as colorless or white crystals that rank at 2.5 on the Mohs scale. The crystal structure is monoclinic and consists of units of three edge-sharing sodium polyhedra, cross-linked by carbonate groups and hydrogen bonds (3).

By the end of the 18th century, methods of producing sodium carbonate were simply not sufficient to keep up with demand in Europe (4). In 1775 the French Academy of Sciences offered a prize to anyone who could synthesize soda ash from salt (5). In 1791 Nicolas Leblanc devised an effective method to produce soda ash, and in the first year he was able to produce over 320 tons. Although he answered the call of the French Academy of Sciences and was successful, he was denied his prize money because of the French Revolution. The Leblanc process is a batch process that starts with sodium chloride, subjects it to various reactions, and ends with sodium carbonate. The process begins by heating a sodium chloride/sulfuric acid mixture to produce sodium sulfate and hydrogen chloride gas. This reaction occurs by the following chemical equation:

2 NaCl + H2SO4 → Na2SO4 + 2 HCl

This reaction was originally discovered by Carl Scheele in 1772. However, the reason this process is referred to as the Leblanc process and not the Scheele process is because Leblanc was able to solve the mystery of the reactions to form the end product of sodium carbonate. The second step in the Leblanc process was mixing the sodium sulfate “salt cakes” with crushed limestone (calcium carbonate) and coal and firing the mixture (5). This reaction ultimately occurs in two steps: in the first, the coal is oxidized into carbon dioxide which reacts with the sodium sulfate and reduces it to sulfide; the second step occurs as the calcium and sodium swap their ligands to make a more thermodynamically favorable combination of sodium carbonate and calcium sulfide (5). This mixture is often referred to as “black ash”. These two reactions can be expressed by the following chemical equations:

Na2SO4 + 2 C → Na2S + 2 CO2       (Reduction of sulfate)

Na2S + CaCO3 → Na2CO3 + CaS     (Thermodynamically favorable recombination)
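
As a sanity check, both Leblanc steps (the carbon reduction of sodium sulfate to sodium sulfide, and the recombination of sodium sulfide with limestone) can be verified to be atom-balanced with a few lines of Python. The small formula parser below is only an illustrative sketch; it handles simple formulas without parentheses or hydrates:

```python
import re
from collections import Counter

def atoms(formula: str, coefficient: int = 1) -> Counter:
    """Count atoms in a simple formula like 'Na2SO4' (no parentheses)."""
    counts = Counter()
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += coefficient * (int(n) if n else 1)
    return counts

def is_balanced(reactants, products) -> bool:
    """Each side is a list of (coefficient, formula) pairs."""
    left, right = Counter(), Counter()
    for c, f in reactants:
        left += atoms(f, c)
    for c, f in products:
        right += atoms(f, c)
    return left == right

# Step 1: Na2SO4 + 2 C -> Na2S + 2 CO2
print(is_balanced([(1, "Na2SO4"), (2, "C")], [(1, "Na2S"), (2, "CO2")]))      # True
# Step 2: Na2S + CaCO3 -> Na2CO3 + CaS
print(is_balanced([(1, "Na2S"), (1, "CaCO3")], [(1, "Na2CO3"), (1, "CaS")]))  # True
```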

The black ash was then washed with water, and the wash water was evaporated to yield sodium carbonate. The extraction portion of the Leblanc process was termed lixiviation (5). The downside of the Leblanc process is the cost of the reagents and the harsh byproducts, namely hydrogen chloride gas. In 1811 Augustin Jean Fresnel discovered that sodium bicarbonate precipitates when carbon dioxide is bubbled through ammonia-containing brine (6). Although this reaction ultimately became key to what is now called the Solvay process, Fresnel did not publish his findings. In 1861, a Belgian chemist by the name of Ernest Solvay turned his attention to creating a cleaner, cheaper method of producing sodium carbonate. Solvay’s solution was an 80-foot-tall gas absorption chamber in which carbon dioxide bubbled through a descending flow of brine, together with an efficient method of recovering and recycling ammonia. The net reaction of the Solvay process is expressed by:

2 NaCl + CaCO3 → Na2CO3 + CaCl2

However, the individual steps of the reaction are rather complex. The first step of the process consists of bubbling ammonia (NH3) through a brine solution. The second step takes the ammoniated brine and bubbles in carbon dioxide (CO2), which causes sodium bicarbonate (NaHCO3) to precipitate. These steps can be summarized by the following:

NaCl + CO2 + NH3 + H2O → NaHCO3 + NH4Cl

The sodium bicarbonate is then filtered out, and the remaining ammonium chloride (NH4Cl) solution is reacted with quicklime (calcium oxide, CaO) to recover the ammonia.

2 NH4Cl + CaO → 2 NH3 + CaCl2 + H2O

Sodium carbonate (Na2CO3) is then formed through calcination of the sodium bicarbonate:

2 NaHCO3 → Na2CO3 + H2O + CO2

This process is considered relatively efficient because only a small amount of ammonia is needed; it is recycled after each reaction. One major byproduct of the Solvay process is calcium chloride, which is usually used as road salt (6). The majority of the world’s soda ash supply is made using the Solvay process.
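
The net equation also fixes the theoretical conversion of salt to soda ash. Assuming ideal stoichiometry and standard atomic weights (a simplification; real plant yields are lower), the maximum output works out to roughly 0.9 tons of sodium carbonate per ton of sodium chloride:

```python
# Theoretical soda ash yield of the net Solvay reaction:
#   2 NaCl + CaCO3 -> Na2CO3 + CaCl2
# using standard atomic weights (g/mol).
Na, Cl, C, O = 22.990, 35.453, 12.011, 15.999

m_nacl = Na + Cl                 # 58.443 g/mol
m_na2co3 = 2 * Na + C + 3 * O    # 105.988 g/mol

# One mole of Na2CO3 requires two moles of NaCl.
yield_per_ton_salt = m_na2co3 / (2 * m_nacl)
print(f"{yield_per_ton_salt:.3f} t Na2CO3 per t NaCl")  # ~0.907
```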

Sodium carbonate has the chemical formula Na2CO3, a molecular weight of 105.988 g/mol, and a density of 2.5 g/cm3. Pure soda ash is a grayish-white, odorless powder. It has a melting point of 856 ºC and no boiling point, because it begins to decompose before it boils. Sodium carbonate is insoluble in ethanol and is soluble in water up to 30 g/100 mL at 20 ºC. Spectral analysis yields an index of refraction of 1.535 (7).

Soda ash has developed numerous applications over the years. The majority of soda ash produced in the world is used in the manufacture of glass, in which sodium carbonate acts as a flux for silica, lowering the melting point of the mixture to a reasonable temperature. Sodium carbonate can also act as a relatively strong base. It is often used as a pH regulator for film developing agents; in pools it neutralizes the corrosive effects of added chlorine and raises the overall pH of the water; in taxidermy it is added to water to remove flesh from bone; in chemistry it serves as a primary standard for acid-base titrations because it is solid at room temperature and air-stable, making it easy to weigh; and it is used as a water softener for laundry (2). Sodium carbonate also plays an important role in maintaining the acid-base homeostasis of biological systems, most importantly blood pH.

The strategic importance of sodium carbonate is questionable. While it can be used in the manufacture of glass, as a cleaning agent, and so on, soda ash is in no way rare. It can be made through the Solvay process and is found in large deposits around the world. When a substance is not scarce, its strategic importance drops drastically, and such is the case with sodium carbonate. Soda ash can currently be bought for anywhere between $289.00 and $354.00 per ton, depending on the desired quantity (8).

Sodium carbonate is a relatively safe material according to its MSDS. It is a skin and eye irritant and can also be hazardous if ingested or inhaled (9). It is non-flammable, but it emits Na2O fumes when heated to decomposition. Sodium carbonate can ignite and burn intensely when it comes into contact with fluorine, and it can react explosively on contact with red-hot aluminum metal. It is recommended to wear gloves, a lab coat, a dust respirator, and splash goggles when handling sodium carbonate.

From a realistic standpoint, sodium carbonate is used mostly for glass production. From a chemical perspective, the chemical equation that summarizes the creation of glass is:

Na2CO3 + SiO2 → Na2SiO3 + CO2

In this reaction, sodium carbonate reacts with silica sand (SiO2) at about 1500 ºC to produce sodium silicate (Na2SiO3) and carbon dioxide (CO2) (10). When the glass is molten, different elements and compounds can be added to change its color; for example, the addition of nickel can result in blue, violet, or even black glass (11). Sodium carbonate in solution can be analyzed using GC-MS to give mass spectra and insight into the purity of a sample. Soda ash can also be analyzed with an IR spectrophotometer, which may give insight into different pieces of the molecular structure.
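
One practical consequence of the glassmaking reaction above is the carbon dioxide released during melting. A rough estimate, again assuming standard atomic weights and complete reaction (an idealization), puts it at a little over 0.4 tons of CO2 per ton of soda ash:

```python
# CO2 evolved per ton of soda ash in glass melting:
#   Na2CO3 + SiO2 -> Na2SiO3 + CO2
# using standard atomic weights (g/mol).
Na, C, O = 22.990, 12.011, 15.999

m_na2co3 = 2 * Na + C + 3 * O    # 105.988 g/mol
m_co2 = C + 2 * O                # 44.009 g/mol

co2_per_ton = m_co2 / m_na2co3   # tons of CO2 per ton of Na2CO3
print(f"{co2_per_ton:.2f} t CO2 per t soda ash")  # ~0.42
```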

The United States is currently the world’s largest producer of sodium carbonate, with more than 11.5 million metric tons produced in 2013 (12). The lion’s share of this soda ash is actually mined from deposits in California, Wyoming, and Utah, a stark contrast to the necessity of the Solvay process in Europe. Due to the abundance of soda ash, whether made through the Solvay process or harvested from deposits, it has not had a dramatic effect on society or culture. While it is essential to glass making, it cannot be considered rare or valuable, which generally leads to an absence of public interest. Sodium carbonate is used for a number of purposes in today’s society, which keeps it a useful compound, but nothing that should be coveted for its value. It simply lacks the characteristics needed to be considered powerful enough to change the world.

References

(1). http://www.ansac.com/products/about-soda-ash/ , accessed 6 February 2015.

(2). http://en.wikipedia.org/wiki/Sodium_carbonate , accessed 6 Feb, 2015.

(3). http://en.wikipedia.org/wiki/Trona , accessed 6 Feb, 2015.

(4). https://www.academia.edu/8035384/Soda_Ash_Production , accessed 6 Feb, 2015.

(5). http://en.wikipedia.org/wiki/Leblanc_process , accessed 6 Feb, 2015.

(6). http://en.wikipedia.org/wiki/Solvay_process , accessed 8 Feb, 2015.

(7). http://pubchem.ncbi.nlm.nih.gov/compound/sodium_carbonate# , accessed 8 Feb, 2015.

(8). http://www.solvaychemicals.us/EN/Products/sodiumproducts/sodaash.aspx , accessed 15 Feb, 2015.

(9). http://www.sciencelab.com/msds.php?msdsId=9927263 , accessed 15 Feb, 2015.

(10). http://www.pilkington.com/pilkington-information/about+pilkington/education/chemistry+of+glass.htm , accessed 16 Feb, 2015.

(11). http://en.wikipedia.org/wiki/Glass_coloring_and_color_marking , accessed 16 Feb, 2015.

(12). http://minerals.usgs.gov/minerals/pubs/commodity/soda_ash/myb1-2013-sodaa.pdf , accessed 16 Feb, 2015.

The Effect of Cultural Changes on the Advancement of Chemistry

Toward the end of the Renaissance, human creativity and ingenuity were at a height not seen in many centuries, and natural human curiosity finally began to assert itself. This is especially true of the advancement of chemistry. In previous centuries, even in the 1500s, chemistry was always a sub-science of alchemy or iatrochemistry. It was not until the start of the 17th century that chemistry began to be treated as a science in its own right. This “new” science appeared through the experimentation of pharmacists and the theorizing of physicians. The popularity of chemistry came from the continued simplification of its language and the demystification of its methods. Individuals like Johann Rudolph Glauber, who was self-taught in chemistry, published books like Furni Novi Philosophici, which gave detailed accounts of laboratory apparatus and chemical operations, and Pharmacopoeia Spagyrica, which gave the recipes for many iatrochemical medicines (1). This surge of detailed information from chemists like Glauber allowed for the emergence of the “scientific amateur”. In addition, Jean Béguin gave public lectures on chemistry and helped to bring people who were not lifelong academics into chemical discovery by publishing Tyrocinium Chymicum, or “The Chemical Beginner”. Although Béguin included very little theory in his book, he distinguished the physicist’s, physician’s, and chemist’s views from one another so that they could be understood separately. Beyond the simplification of chemical language, the invention of the printing press was largely responsible for the increased availability of these new chemistry texts (2). The perfect storm of relatively common language and an abundance of cheap chemical literature is what ultimately led to the acceptance and popularization of chemistry.

References

(1). H.M. Leicester. The Historical Background of Chemistry, Dover Publications, Inc., New York, USA, (1971) pp. 100-129.

(2). http://en.wikipedia.org/wiki/Printing_press, accessed 5 Feb, 2015.

The Practical Advancement of Chemistry during the 16th Century

With regard to chemistry, the 16th century is said to be “one in which the technological branch of the science progressed while the theoretical side remained relatively inactive”. Normally this would be considered a setback; however, the theoretical side of chemistry that had emerged from the 14th and 15th centuries was one of alchemical ideas, with the transmutation of gold at its heart. Probierbüchlein, a German mining and mineral book focused on assaying, was responsible for an incredible step forward for science. It was the first book of its kind to stress the importance of quantitative measurements, specifically the use of a balance; the alchemists had made no mention of any method of quantitatively describing their experiments. The importance of quantitative measurement was cemented when Giovanbattista della Porta, while describing his methods of distilling the oils required for perfume, published his yields. The next step forward for chemistry came from Paracelsus, who developed the idea of iatrochemistry, the use of alchemy to prepare medicines, which he referred to as arcana. This was a major step away from what had been considered the major principle of alchemy: the understanding and replication of transmutation. Paracelsus worked through the processes needed to form salts from metals. These “new” mineral substances led to an enormous increase in the number of available remedies that could be used to treat diseases, which in turn stimulated the search for new remedies and ultimately hastened the discovery of new chemical substances [1].

References

[1]. H.M. Leicester. The Historical Background of Chemistry, Dover Publications, Inc., New York, USA, (1971) pp. 84-100.