Sunday 17 April 2016

Renewable energy in the United Kingdom - history

Renewable energy can be divided into the generation of renewable electricity and the generation of renewable heat. From the mid-1990s renewable energy began to contribute to the electricity generated in the United Kingdom, adding to a small hydroelectricity generating capacity. Renewable electricity sources together provided 14.9% of the electricity generated in the United Kingdom in 2013, reaching 53.7 TWh of electricity generated. In the second quarter of 2015, renewable electricity penetration exceeded 25% and surpassed coal generation for the first time.

Under the 2009 EU Renewables Directive, the UK's target is for renewable sources to supply 15% of total energy consumption by 2020; measured in accordance with the methodology set out in the Directive, the renewable contribution stood at 5.2% in 2013.

Interest in renewable energy in the UK has increased in recent years due to new UK and EU targets for reductions in carbon emissions and the promotion of renewable electricity generation through commercial incentives such as the Renewables Obligation Certificate scheme (ROCs) and feed-in tariffs (FITs), and the promotion of renewable heat through the Renewable Heat Incentive. Historically, hydroelectric schemes were the largest producers of renewable electricity in the UK, but these have now been surpassed by wind power schemes, for which the UK has large potential resources.
Renewable heat energy, in the form of biofuels, dates back to 415,000 BP in the UK. Uranium-series dating and thermoluminescence dating provide evidence of the use of wood fires at the site of Beeches Pit, Suffolk.

Waterwheel technology was imported to the country by the Romans, with sites at Ikenham and Willowford in England dating from the 2nd century AD. At the time of the compilation of the Domesday Book (1086), there were 5,624 watermills in England alone, only 2% of which have not been located by modern archaeological surveys. Later research estimates a less conservative number of 6,082, and it has been pointed out that this should be considered a minimum, as the northern reaches of England were never properly recorded. By 1300, this number had risen to between 10,000 and 15,000.

Windmills first appeared in Europe during the Middle Ages. The earliest certain reference to a windmill in Europe (assumed to have been of the vertical type) dates from 1185, in the former village of Weedley in Yorkshire, which was located at the southern tip of the Wolds overlooking the Humber estuary. The first electricity-generating wind turbine was a battery-charging machine installed in July 1887 by the Scottish academic James Blyth to light his holiday home in Marykirk, Scotland.

In 1878 the world's first hydroelectric power scheme was developed at Cragside in Northumberland, England, by William George Armstrong. It was used to power a single arc lamp in his art gallery.

However, almost all electricity generation thereafter was based on burning coal. In 1964 coal accounted for 88% of electricity generation and oil for 11%. The remainder was mostly supplied by hydroelectric power, which continued to grow its share of electricity generation as coal struggled to meet demand. The world's first pumped-storage hydroelectric power station, the Cruachan Dam in Argyll and Bute, Scotland, became fully operational in 1967. The Central Electricity Generating Board experimented with wind energy on the Lleyn Peninsula in Wales during the 1950s, but the project was shelved after local opposition.

Renewable energy experienced a turning point in the 1970s, with the 1973 oil crisis, the 1972 miners' strike, growing environmentalism and wind energy development in the United States exerting pressure on the government. In 1974, the Central Policy Review Staff recommended that 'the first stage of a full technical and economic appraisal of harnessing wave power for electricity generation should be put in hand at once.' Wave power was seen as the future of the nation's energy policy, while solar, wind and tidal schemes were dismissed as 'impractical'. Nevertheless, an alternative energy research centre was opened in Harwell, although it was criticised for favouring nuclear power. By 1978, four wave energy generator prototypes had been designed, but they were later deemed too expensive. The Wave Energy Programme closed in the same year.

During this period, there was a large increase in installations of solar thermal collectors to provide hot water. In 1986, Southampton began pumping heat from a geothermal borehole through a district heating network. Over the years, several combined heat and power (CHP) engines and backup boilers for heating have been added, along with absorption chillers and backup vapour-compression machines for cooling.

In 1987 a 3.7 MW demonstration wind turbine on Orkney, the largest in Britain at the time, began supplying electricity to homes. Privatisation of the energy sector in 1989 caused direct governmental research funding to cease. Two years later the UK's first onshore wind farm was opened at Delabole, Cornwall. The farm consists of 10 turbines and produces enough energy for 2,700 homes. This was followed by the UK's first offshore wind farm at North Hoyle, off the coast of North Wales.

The share of renewables in the country's electricity generation has risen from below 2% in 1990 to 14.9% in 2013, helped by subsidy and falling costs. Introduced on 1 April 2002, the Renewables Obligation requires all electricity suppliers who supply electricity to end consumers to source a set proportion of their electricity from eligible renewable sources; the proportion increases each year, from a 3% requirement in 2002-2003, via 10.4% in 2010-2012, up to 15.4% by 2015-2016. The UK Government announced in the 2006 Energy Review an additional target of 20% by 2020-21. For each eligible megawatt hour of renewable energy generated, a tradable certificate called a Renewables Obligation Certificate (ROC) is issued by OFGEM.

In 2007, the United Kingdom Government agreed to an overall European Union target of generating 20% of the European Union's energy supply from renewable sources by 2020. Each European Union member state was given its own allocated target; for the United Kingdom it is 15%. This was formalised in January 2009 with the passage of the EU Renewables Directive. As renewable heat and fuel production in the United Kingdom start from extremely low bases, RenewableUK estimates that this will require 35–40% of the United Kingdom's electricity to be generated from renewable sources by that date, to be met largely by 33–35 GW of installed wind capacity. The 2008 Climate Change Act includes a commitment to reducing net greenhouse gas emissions by 80% by 2050 (on 1990 levels) and an intermediate target reduction of 26% by 2020.

The Green Deal is a UK government policy, launched by the Department of Energy and Climate Change on 1 October 2012. It provides loans for energy-saving measures for properties in Great Britain, enabling consumers to benefit from energy-efficient improvements to their homes.

The Thanet Wind Farm (Thanet Offshore Wind Farm) - Thanet district in Kent, England

The Thanet Wind Farm (also sometimes called Thanet Offshore Wind Farm) is an offshore wind farm 7 miles (11 km) off the coast of the Thanet district in Kent, England. As of June 2013 it is the world's third-largest offshore wind farm, the largest being the London Array, followed by Walney Wind Farm. It has a nameplate capacity (maximum output) of 300 MW and cost £780–900 million (US$1.2–1.4 billion). Thanet is one of fifteen Round 2 wind projects announced by the Crown Estate in January 2004, but the first to be developed. It was officially opened on 23 September 2010, when it overtook Horns Rev 2 as the biggest offshore wind farm in the world. It has since been overtaken by Walney.
The project covers an area of 13.5 square miles (35 km2), with 500 metres (1,600 ft) between turbines and 800 metres (2,600 ft) between the rows. Average water depth is 14–23 metres (46–75 ft). Planning permission for the project was granted on 18 December 2006. According to Thanet Offshore Wind Ltd, it was expected to be "the largest operational offshore wind farm in the world". The Thanet project has a total capacity of 300 MW which, by yearly average, is sufficient to supply approximately 240,000 homes. It has an estimated generation of 960 GW·h of electricity per year, corresponding to a load factor of 36.5% and an average power density of 3.1 W/m².
In 2011, the yearly production achieved was 823.88 GW·h, corresponding to a load factor of 31.35%.
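
The load factor and power density figures follow directly from the nameplate capacity, site area and annual energy figures quoted above; here is a minimal Python sketch of that arithmetic (the 8,760-hour year and 35 km2 area are the values given in the text):

    # Load factor = actual annual energy / (nameplate capacity x hours in a year).
    def load_factor(annual_gwh, capacity_mw, hours_per_year=8760):
        actual_mwh = annual_gwh * 1000.0
        potential_mwh = capacity_mw * hours_per_year
        return actual_mwh / potential_mwh

    print(load_factor(960.0, 300))      # estimated generation -> ~0.365 (36.5%)
    print(load_factor(823.88, 300))     # 2011 production      -> ~0.3135 (31.35%)

    # Average power density over the 35 km^2 site, using the estimated load factor.
    average_power_w = 300e6 * load_factor(960.0, 300)
    print(average_power_w / 35e6)       # -> ~3.1 W/m^2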

Two submarine power cables (supplied by the Italy-based Prysmian Group) run from an offshore substation within the wind farm to an existing onshore substation in Richborough, Kent, connecting through two transformers described as a world first. The offshore substation steps up the turbine voltage of 33 kV to 132 kV for the grid. Maintenance of the turbines is carried out by Vestas, while a separate maintenance agreement with SLP Energy covers the turbine foundations. The turbines were installed by the Danish offshore wind farm services provider A2SEA; the turbine installation vessel (TIV) MPI Resolution carried and installed the turbines.
The Thanet scheme is project-financed. Thanet Offshore Wind Ltd (TOW), the project company, was owned by the hedge fund Christofferson, Robb & Co. It was purchased from a group of sponsors led by Warwick Energy Ltd. In August 2008 Christofferson, Robb & Co placed the project back on the market. On 10 November 2008, Vattenfall, a Swedish energy company, acquired TOW.
The development was due to be in place by 2008. Vestas were chosen as the preferred turbine supplier in July 2006, and SLP were chosen as preferred supplier for the foundations in September 2006. The project was delayed by a number of issues, including problems with Vestas, who temporarily withdrew their V90 offshore model from the market in 2007 following gearbox problems. The V90-3MW was re-released for sale from May 2008.

Vattenfall acquired the project in November 2008. On 28 June 2010, the company reported that all turbines had been installed, with commissioning due by the end of 2010. The wind farm was completed in September 2010.

Offshore wind power or offshore wind energy - the use of wind farms constructed offshore

Offshore wind power or offshore wind energy is the use of wind farms constructed offshore, usually on the continental shelf, to harvest wind energy to generate electricity. Stronger wind speeds are available offshore compared to on land, so offshore wind power's contribution in terms of electricity supplied is higher, and NIMBY opposition to construction is usually much weaker. However, offshore wind farms are relatively expensive. At the end of 2014, 3,230 turbines at 84 offshore wind farms across 11 European countries had been installed and grid-connected, for a total capacity of 11,027 MW.

As of 2010, Siemens and Vestas supplied the turbines for 90% of offshore wind power capacity, while Dong Energy, Vattenfall and E.ON were the leading offshore operators. As of 1 January 2016, about 12 gigawatts (GW) of offshore wind power capacity was operational, mainly in Northern Europe, with 3,755 MW of that coming online during 2015.[4] According to BTM Consult, more than 16 GW of additional capacity would be installed before the end of 2014, with the United Kingdom and Germany becoming the two leading markets. Offshore wind power capacity is expected to reach a total of 75 GW worldwide by 2020, with significant contributions from China and the United States.

As of 2013, the 630 megawatt (MW) London Array is the largest offshore wind farm in the world, with the 504 MW Greater Gabbard wind farm the second largest, followed by the 367 MW Walney Wind Farm. All are off the coast of the UK. These projects will be dwarfed by subsequent wind farms that are in the pipeline, including Dogger Bank at 4,800 MW, Norfolk Bank (7,200 MW) and Irish Sea (4,200 MW). At the end of June 2013, total European combined offshore wind energy capacity was 6,040 MW. The UK installed 513.5 MW of offshore wind power in the first half of 2013.
Offshore wind power refers to the construction of wind farms in bodies of water to generate electricity from wind. Unlike the typical usage of the term "offshore" in the marine industry, offshore wind power includes inshore water areas such as lakes, fjords and sheltered coastal areas, utilizing traditional fixed-bottom wind turbine technologies, as well as deep-water areas utilizing floating wind turbines.
Europe is the world leader in offshore wind power, with the first offshore wind farm (Vindeby) installed in Denmark in 1991. In 2013, offshore wind power contributed 1,567 MW of the total 11,159 MW of wind power capacity constructed that year. By January 2014, 69 offshore wind farms had been constructed in Europe, with an average annual rated capacity of 482 MW in 2013, and as of January 2014 the United Kingdom has by far the largest capacity of offshore wind farms, with 3,681 MW. Denmark is second with 1,271 MW installed and Belgium is third with 571 MW. Germany comes fourth with 520 MW, followed by the Netherlands (247 MW), Sweden (212 MW), Finland (26 MW), Ireland (25 MW), Spain (5 MW), Norway (2 MW) and Portugal (2 MW). By January 2014, the total installed capacity of offshore wind farms in European waters had reached 6,562 MW.

As of January 2014, the German wind turbine manufacturer Siemens Wind Power and the Danish wind turbine manufacturer Vestas had together installed 80% of the world's 6.6 GW of offshore wind power capacity; Senvion (formerly REpower) came third with 8%, followed by BARD with 6%.

Projections for 2020 put wind farm capacity at 40 GW in European waters, which would supply about 4% of the European Union's electricity demand.

The Chinese government has set ambitious targets of 5 GW of installed offshore wind capacity by 2015 and 30 GW by 2020, which would eclipse capacity in other countries. As of May 2014, the installed capacity of offshore wind power in China was 565 MW.

India is looking at the potential of offshore wind power plants, with a 100 MW demonstration plant planned off the coast of Gujarat as of 2014. In 2013, a group of organizations led by the Global Wind Energy Council (GWEC) started project FOWIND (Facilitating Offshore Wind in India) to identify potential zones for the development of offshore wind power in India and to stimulate R&D activities in this area. In 2014 FOWIND commissioned the Center for Study of Science, Technology and Policy (CSTEP) to undertake pre-feasibility studies in eight zones in Tamil Nadu which have been identified as having potential.

Wind power - the use of air flow through wind turbines to power generators for electricity

Wind power is the use of air flow through wind turbines to mechanically power generators for electricity. Wind power, as an alternative to burning fossil fuels, is plentiful, renewable, widely distributed, clean, produces no greenhouse gas emissions during operation, and uses little land. The net effects on the environment are far less problematic than those of non-renewable power sources.

Wind farms consist of many individual wind turbines which are connected to the electric power transmission network. Onshore wind is an inexpensive source of electricity, competitive with or in many places cheaper than coal or gas plants. Offshore wind is steadier and stronger than on land, and offshore farms have less visual impact, but construction and maintenance costs are considerably higher. Small onshore wind farms can feed some energy into the grid or provide electricity to isolated off-grid locations.

Wind power gives variable power, which is very consistent from year to year but varies significantly over shorter time scales. It is therefore used in conjunction with other electric power sources to give a reliable supply. As the proportion of wind power in a region increases, a need to upgrade the grid and a lowered ability to supplant conventional production can occur. Power management techniques such as having excess capacity, geographically distributed turbines, dispatchable backing sources, sufficient hydroelectric power, exporting and importing power to neighboring areas, using vehicle-to-grid strategies or reducing demand when wind production is low can in many cases overcome these problems. In addition, weather forecasting permits the electricity network to be readied for the predictable variations in production that occur.

As of 2015, Denmark generates 40% of its electricity from wind, and at least 83 other countries around the world use wind power to supply their electricity grids. In 2014 global wind power capacity expanded 16% to 369,553 MW. Yearly wind energy production is also growing rapidly and has reached around 4% of worldwide electricity usage, and 11.4% in the EU.
Wind power has been used as long as humans have put sails into the wind. For more than two millennia wind-powered machines have ground grain and pumped water. Wind power was widely available and not confined to the banks of fast-flowing streams, nor did it later require sources of fuel. Wind-powered pumps drained the polders of the Netherlands, and in arid regions such as the American Midwest or the Australian outback, wind pumps provided water for livestock and steam engines.

The first windmill used for the production of electricity was built in Scotland in July 1887 by Prof James Blyth of Anderson's College, Glasgow (the precursor of Strathclyde University). Blyth's 10 m high, cloth-sailed wind turbine was installed in the garden of his holiday cottage at Marykirk in Kincardineshire and was used to charge accumulators developed by the Frenchman Camille Alphonse Faure, to power the lighting in the cottage, thus making it the first house in the world to have its electricity supplied by wind power. Blyth offered the surplus electricity to the people of Marykirk for lighting the main street; however, they turned down the offer as they thought electricity was "the work of the devil." Although he later built a wind turbine to supply emergency power to the local Lunatic Asylum, Infirmary and Dispensary of Montrose, the invention never really caught on, as the technology was not considered to be economically viable.

Across the Atlantic, in Cleveland, Ohio, a larger and heavily engineered machine was designed and constructed in the winter of 1887–1888 by Charles F. Brush; it was built by his engineering company at his home and operated from 1886 until 1900. The Brush wind turbine had a rotor 17 m (56 feet) in diameter and was mounted on an 18 m (60 foot) tower. Although large by today's standards, the machine was rated at only 12 kW. The connected dynamo was used either to charge a bank of batteries or to operate up to 100 incandescent light bulbs, three arc lamps, and various motors in Brush's laboratory.

With the development of electric power, wind power found new applications in lighting buildings remote from centrally generated power. Throughout the 20th century, two parallel paths developed: small wind stations suitable for farms or residences, and larger utility-scale wind generators that could be connected to electricity grids for remote use of power. Today wind-powered generators operate in every size range, from tiny stations for battery charging at isolated residences up to near-gigawatt-sized offshore wind farms that provide electricity to national electrical networks.

A minicomputer, or colloquially mini - a class of smaller computers - history

A minicomputer, or colloquially mini, is a class of smaller computers that developed in the mid-1960s and sold for much less than mainframe and mid-size computers from IBM and its direct competitors. In a 1970 survey, the New York Times suggested a consensus definition of a minicomputer as a machine costing less than US$25,000, with an input-output device such as a teleprinter and at least four thousand words of memory, that is capable of running programs in a higher-level language, such as Fortran or BASIC. The class formed a distinct group with its own software architectures and operating systems. Minis were designed for control, instrumentation, human interaction, and communication switching, as distinct from calculation and record keeping. Many were sold indirectly to original equipment manufacturers (OEMs) for final end-use applications. During the two-decade lifetime of the minicomputer class (1965-1985), almost 100 companies formed and only a half dozen remained.

When single-chip CPUs appeared, beginning with the Intel 4004 in 1971, the term "minicomputer" came to mean a machine that lies in the middle range of the computing spectrum, in between the smallest mainframe computers and the microcomputers. The term "minicomputer" is little used today; the contemporary term for this class of system is "midrange computer", such as the higher-end SPARC, Power Architecture and Itanium-based systems from Oracle, IBM and Hewlett-Packard.
The term "minicomputer" developed in the 1960s to describe the smaller computers that became possible with the use of transistors and core memory technologies, minimal instructions sets and less expensive peripherals such as the ubiquitous Teletype Model 33 ASR. They usually took up one or a few 19-inch rack cabinets, compared with the large mainframes that could fill a room.

The definition of minicomputer is vague, with the consequence that there are a number of candidates for the first minicomputer. An early and highly successful minicomputer was Digital Equipment Corporation's (DEC) 12-bit PDP-8, which was built using discrete transistors and cost from US$16,000 upwards when launched in 1964. Later versions of the PDP-8 took advantage of small-scale integrated circuits. The important precursors of the PDP-8 include the PDP-5, LINC, the TX-0, the TX-2, and the PDP-1. DEC gave rise to a number of minicomputer companies along Massachusetts Route 128, including Data General, Wang Laboratories, Apollo Computer, and Prime Computer.

Minicomputers were also known as midrange computers. They grew to have relatively high processing power and capacity. They were used in manufacturing process control, telephone switching and to control laboratory equipment. In the 1970s, they were the hardware that was used to launch the computer-aided design (CAD) industry and other similar industries where a smaller dedicated system was needed.

The 7400 series of TTL integrated circuits started appearing in minicomputers in the late 1960s. The 74181 arithmetic logic unit (ALU) was commonly used in the CPU data paths. Each 74181 had a bus width of four bits, hence the popularity of bit-slice architecture. The 7400 series offered data-selectors, multiplexers, three-state buffers, memories, etc. in dual in-line packages with one-tenth inch spacing, making major system components and architecture evident to the naked eye. Starting in the 1980s, many minicomputers used VLSI circuits.
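
To illustrate the bit-slice idea, here is a small Python sketch (not a model of the actual 74181, which also offered logic functions) showing how 4-bit-wide slices can be cascaded through their carry signals to build an adder of any word width:

    def alu_slice_add(a4, b4, carry_in):
        """One 4-bit slice in 'add' mode: returns (4-bit sum, carry out)."""
        total = (a4 & 0xF) + (b4 & 0xF) + carry_in
        return total & 0xF, (total >> 4) & 1

    def wide_add(a, b, width=16):
        """Cascade 4-bit slices to add two width-bit words, bit-slice style."""
        result, carry = 0, 0
        for shift in range(0, width, 4):
            nibble, carry = alu_slice_add((a >> shift) & 0xF, (b >> shift) & 0xF, carry)
            result |= nibble << shift
        return result & ((1 << width) - 1), carry

    result, carry = wide_add(0x1234, 0x0FFF)
    print(hex(result), carry)               # -> 0x2233 0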

At the launch of the MITS Altair 8800 in 1975, Radio-Electronics magazine referred to the system as a "minicomputer", although the term microcomputer soon became usual for personal computers based on single-chip microprocessors. At the time, microcomputers were 8-bit, single-user, relatively simple machines running simple program-launcher operating systems like CP/M or MS-DOS, while minis were much more powerful systems that ran full multi-user, multitasking operating systems such as VMS and Unix. Although the classical mini was a 16-bit computer, the emerging higher-performance superminis were 32-bit.
The decline of the minis was due to the lower cost of microprocessor-based hardware, the emergence of inexpensive and easily deployable local area network systems, the emergence of the 68020, 80286 and 80386 microprocessors, and the desire of end-users to be less reliant on inflexible minicomputer manufacturers and IT departments or "data centers". The result was that minicomputers and computer terminals were replaced by networked workstations, file servers and PCs in some installations, beginning in the latter half of the 1980s.

During the 1990s, the change from minicomputers to inexpensive PC networks was cemented by the development of several versions of Unix and Unix-like systems that ran on the Intel x86 microprocessor architecture, including Solaris, Linux, FreeBSD, NetBSD and OpenBSD. Also, the Microsoft Windows series of operating systems, beginning with Windows NT, now included server versions that supported preemptive multitasking and other features required for servers.

As microprocessors have become more powerful, the CPUs built up from multiple components – once the distinguishing feature differentiating mainframes and midrange systems from microcomputers – have become increasingly obsolete, even in the largest mainframe computers.

Digital Equipment Corporation (DEC) was once the leading minicomputer manufacturer, at one time the second-largest computer company after IBM. But as the minicomputer declined in the face of generic Unix servers and Intel-based PCs, not only DEC, but almost every other minicomputer company including Data General, Prime, Computervision, Honeywell and Wang Laboratories, many based in New England (hence the end of the Massachusetts Miracle), also collapsed or merged. DEC was sold to Compaq in 1998, while Data General was acquired by EMC Corporation.

Today only a few proprietary minicomputer architectures survive. The IBM System/38 operating system, which introduced many advanced concepts, lives on with IBM's AS/400. 'AS' stands for 'Application System', a name chosen in recognition of the myriad lines of 'legacy code' (programs) already written. Great efforts were made by IBM to enable programs originally written for the System/34 and System/36 to be moved to the AS/400. The AS/400 was replaced by the iSeries, which was subsequently replaced by the System i. In 2008, the System i was replaced by the IBM Power Systems. By contrast, competing proprietary computing architectures from the early 1980s, such as DEC's VAX, Wang VS and Hewlett-Packard's HP 3000, have long been discontinued without a compatible upgrade path. OpenVMS runs on HP Alpha and Intel IA-64 (Itanium) CPU architectures.

Tandem Computers, which specialized in reliable large-scale computing, was acquired by Compaq, and a few years afterward the combined entity merged with Hewlett Packard. The NSK-based NonStop product line was re-ported from MIPS processors to Itanium-based processors branded as 'HP Integrity NonStop Servers'. As in the earlier migration from stack machines to MIPS microprocessors, all customer software was carried forward without source changes. Integrity NonStop continues to be HP's answer for the extreme scaling needs of its very largest customers. The NSK operating system, now termed NonStop OS, continues as the base software environment for the NonStop Servers, and has been extended to include support for Java and integration with popular development tools like Visual Studio and Eclipse.

Digital Equipment Corporation (DEC) - history

Digital Equipment Corporation, also known as DEC and using the trademark Digital, was a major American company in the computer industry from the 1960s to the 1990s. It was a leading vendor of computer systems, including computers, software, and peripherals, and its PDP and successor VAX products were the most successful of all minicomputers in terms of sales.

From 1957 until 1992 its headquarters were located in a former wool mill in Maynard, Massachusetts (since renamed Clock Tower Place and now home to multiple companies). DEC was acquired in June 1998 by Compaq, which subsequently merged with Hewlett-Packard in May 2002. Some parts of DEC, notably the compiler business and the Hudson, Massachusetts facility, were sold to Intel.

Digital Equipment Corporation should not be confused with the unrelated companies Digital Research, Inc. or Western Digital, although the latter once manufactured the LSI-11 chipsets used in DEC's low-end PDP-11/03 computers.
Initially focusing on the small end of the computer market allowed DEC to grow without its potential competitors making serious efforts to compete with it. The PDP series of machines became popular in the 1960s, especially the PDP-8, widely considered to be the first successful minicomputer. Looking to simplify and update their line, DEC replaced most of their smaller machines with the PDP-11 in 1970, eventually selling over 600,000 units and cementing DEC's position in the industry. Originally designed as a follow-on to the PDP-11, DEC's VAX-11 series was the first widely used 32-bit minicomputer, sometimes referred to as a "supermini". These were able to compete in many roles with larger mainframe computers, such as the IBM System/370. The VAX was a best-seller, with over 400,000 sold, and its sales through the 1980s propelled the company into second place in the industry. At its peak, DEC was the second-largest employer in Massachusetts, second only to the state government.

The rapid rise of the business microcomputer in the late 1980s, and especially the introduction of powerful 32-bit systems in the 1990s, quickly eroded the value of DEC's systems. DEC's last major attempt to find a space in the rapidly changing market was the DEC Alpha 64-bit RISC processor architecture. DEC initially started work on Alpha as a way to re-implement their VAX series, but also employed it in a range of high-performance workstations. Although the Alpha processor family met both of these goals and, for most of its lifetime, was the fastest processor family on the market, its extremely high asking prices meant it was outsold by lower-priced x86 chips from Intel and clones such as those from AMD.

The company was acquired in June 1998 by Compaq, in what was at that time the largest merger in the history of the computer industry. At the time, Compaq was focused on the enterprise market and had recently purchased several other large vendors. DEC was a major player overseas where Compaq had less presence. However, Compaq had little idea what to do with its acquisitions, and soon found itself in financial difficulty of its own. The company subsequently merged with Hewlett-Packard in May 2002. As of 2007 some of DEC's product lines were still produced under the HP name.
Ken Olsen and Harlan Anderson were two engineers who had been working at MIT Lincoln Laboratory on the lab's various computer projects. The Lab is best known for its work on what would today be known as "interactivity", and its machines were among the first where operators had direct control over programs running in real time. These had started in 1944 with the famed Whirlwind, which was originally developed to make a flight simulator for the US Navy, although this was never completed. Instead, this effort evolved into the SAGE system for the US Air Force, which used large screens and light guns to allow operators to interact with radar data stored in the computer.

When the Air Force project wound down, the Lab turned their attention to an effort to build a version of the Whirlwind using transistors in place of vacuum tubes. In order to test their new circuitry, they first built a small 18-bit machine known as TX-0, which first ran in 1956.[5] When the TX-0 successfully proved the basic concepts, attention turned to a much larger system, the 36-bit TX-2 with a then-enormous 64 kWords of core memory. Core was so expensive that parts of TX-0's memory were stripped for the TX-2, and what remained of the TX-0 was then given to MIT on permanent loan.

At MIT, Olsen and Anderson noticed something odd: students would line up for hours to get a turn to use the stripped-down TX-0, while largely ignoring a faster IBM machine that was also available. The two decided that the draw of interactive computing was so strong that they felt there was a market for a small machine dedicated to this role, essentially a commercialized TX-0. They could sell this to users where graphical output or realtime operation would be more important than outright performance. Additionally, as the machine would cost much less than the larger systems then available, it would also be able to serve users that needed a lower-cost solution dedicated to a specific task, where a larger 36-bit machine would not be needed.

In 1957 when the pair and Ken's brother Stan went looking for capital, they found that the American business community was hostile to investing in computer companies. Many smaller computer companies had come and gone in the 1950s, wiped out when new technical developments rendered their platforms obsolete, and even large companies like RCA and General Electric were failing to make a profit in the market. The only serious expression of interest came from Georges Doriot and his American Research and Development Corporation (AR&D). Worried that a new computer company would find it difficult to arrange further financing, Doriot suggested the fledgling company change its business plan to focus less on computers, and even change their name from "Digital Computer Corporation".

The pair returned with an updated business plan that outlined two phases for the company's development. They would start by selling computer modules as stand-alone devices that could be purchased separately and wired together to produce a number of different digital systems for lab use. Then, if these "digital modules" were able to build a self-sustaining business, the company would be free to use them to develop a complete computer in their Phase II. The newly christened "Digital Equipment Corporation" received $70,000 from AR&D for a 70% share of the company, and began operations in a Civil War era textile mill in Maynard, Massachusetts, where plenty of inexpensive manufacturing space was available.

A central processing unit (CPU) - history

A central processing unit (CPU) is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logical, control and input/output (I/O) operations specified by the instructions. The term has been used in the computer industry at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit (CU), distinguishing these core elements of a computer from external components such as main memory and I/O circuitry.

The form, design and implementation of CPUs have changed over the course of their history, but their fundamental operation remains almost unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory and "executes" them by directing the coordinated operations of the ALU, registers and other components.
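
The fetch-and-execute cycle described above can be illustrated with a toy accumulator machine in Python; the instruction set and encoding here are invented for illustration and do not correspond to any real CPU:

    # Each instruction is an (opcode, operand) pair; 'memory' is a list of integers.
    LOAD, ADD, STORE, HALT = range(4)

    def run(program, memory):
        acc, pc = 0, 0                      # registers: accumulator and program counter
        while True:
            opcode, operand = program[pc]   # fetch
            pc += 1
            if opcode == LOAD:              # decode and execute
                acc = memory[operand]
            elif opcode == ADD:
                acc += memory[operand]
            elif opcode == STORE:
                memory[operand] = acc
            elif opcode == HALT:
                return memory

    # Add the values in cells 0 and 1 and store the sum in cell 2.
    print(run([(LOAD, 0), (ADD, 1), (STORE, 2), (HALT, 0)], [7, 5, 0]))   # -> [7, 5, 12]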

Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer; such integrated devices are variously called microcontrollers or systems on a chip (SoC). Some computers employ a multi-core processor, which is a single chip containing two or more CPUs called "cores"; in that context, single chips are sometimes referred to as "sockets". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central.
Computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers".[4] Since the term "CPU" is generally defined as a device for software (computer program) execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.

The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner.[5] On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949.[6] EDVAC was designed to perform a certain number of instructions (or operations) of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer; the Manchester Small-Scale Experimental Machine, a small prototype stored-program computer, ran its first program on 21 June 1948, and the Manchester Mark 1 ran its first program during the night of 16–17 June 1949.

Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys.

While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications; for instance, the Atmel AVR microcontrollers are Harvard architecture processors.

Relays and vacuum tubes (thermionic tubes) were commonly used as switching elements; a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.

Extreme programming (XP) - software development methodology

Extreme programming (XP) is a software development methodology which is intended to improve software quality and responsiveness to changing customer requirements. As a type of agile software development, it advocates frequent "releases" in short development cycles, which is intended to improve productivity and introduce checkpoints at which new customer requirements can be adopted.

Other elements of extreme programming include: programming in pairs or doing extensive code review, unit testing of all code, avoiding programming of features until they are actually needed, a flat management structure, simplicity and clarity in code, expecting changes in the customer's requirements as time passes and the problem is better understood, and frequent communication with the customer and among programmers. The methodology takes its name from the idea that the beneficial elements of traditional software engineering practices are taken to "extreme" levels. As an example, code reviews are considered a beneficial practice; taken to the extreme, code can be reviewed continuously, i.e. the practice of pair programming.
Extreme Programming was created by Kent Beck during his work on the Chrysler Comprehensive Compensation System (C3) payroll project. Beck became the C3 project leader in March 1996, began to refine the development methodology used in the project, and wrote a book on the methodology (Extreme Programming Explained, published in October 1999). Chrysler cancelled the C3 project in February 2000, after seven years, when the company was acquired by Daimler-Benz.

Many extreme programming practices have been around for some time; the methodology takes "best practices" to extreme levels. For example, the "practice of test-first development, planning and writing tests before each micro-increment" was used as early as NASA's Project Mercury, in the early 1960s (Larman 2003). To shorten the total development time, some formal test documents (such as those for acceptance testing) have been developed in parallel with (or shortly before) the software being ready for testing. A NASA independent test group can write the test procedures, based on formal requirements and logical limits, before the software has been written and integrated with the hardware. In XP, this concept is taken to the extreme by writing automated tests (perhaps inside of software modules) which validate the operation of even small sections of software coding, rather than only testing the larger features.
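
As a small illustration of the test-first style mentioned above (the payroll rule, numbers and function name are invented for the example), the tests are written in Python before the code they exercise:

    import unittest

    class GrossPayTest(unittest.TestCase):
        # These tests are written first, before gross_pay() exists.
        def test_no_overtime_below_40_hours(self):
            self.assertEqual(gross_pay(hours=38, rate=10.0), 380.0)

        def test_overtime_paid_at_time_and_a_half(self):
            self.assertEqual(gross_pay(hours=45, rate=10.0), 475.0)

    # The simplest implementation that makes the tests pass.
    def gross_pay(hours, rate):
        overtime = max(0, hours - 40)
        return (hours - overtime) * rate + overtime * rate * 1.5

    if __name__ == "__main__":
        unittest.main()
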
Software development in the 1990s was shaped by two major influences: internally, object-oriented programming replaced procedural programming as the programming paradigm favored by some in the industry; externally, the rise of the Internet and the dot-com boom emphasized speed-to-market and company growth as competitive business factors. Rapidly changing requirements demanded shorter product life-cycles, and were often incompatible with traditional methods of software development.

The Chrysler Comprehensive Compensation System (C3) was started in order to determine the best way to use object technologies, using the payroll systems at Chrysler as the object of research, with Smalltalk as the language and GemStone as the data access layer. Chrysler brought in Kent Beck, a prominent Smalltalk practitioner, to do performance tuning on the system, but his role expanded as he noted several problems with the development process. He took this opportunity to propose and implement some changes in the team's practices based on his work with his frequent collaborator, Ward Cunningham. Beck describes the early conception of the methods:

Beck invited Ron Jeffries to the project to help develop and refine these methods. Jeffries thereafter acted as a coach to instill the practices as habits in the C3 team.

Information about the principles and practices behind XP was disseminated to the wider world through discussions on the original wiki, Cunningham's WikiWikiWeb. Various contributors discussed and expanded upon the ideas, and some spin-off methodologies resulted (see agile software development). XP concepts have also been explained, for several years, using a hypertext system map on the XP website at http://www.extremeprogramming.org, dating from circa 1999.

Beck edited a series of books on XP, beginning with his own Extreme Programming Explained (1999, ISBN 0-201-61641-6), spreading his ideas to a much larger audience. Authors in the series covered various aspects of XP and its practices. The series included a book that was critical of the practices.

Software engineering - history

Software engineering is the application of engineering to the design, development, implementation and maintenance of software in a systematic method.
Typical formal definitions of software engineering include:
"research, design, develop, and test operating systems-level software, compilers, and network distribution software for medical, industrial, military, communications, aerospace, business, scientific, and general computing applications";
"the systematic application of scientific and technological knowledge, methods, and experience to the design, implementation, testing, and documentation of software";
"the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software";
"an engineering discipline that is concerned with all aspects of software production"; and
"the establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently on real machines".

When the first digital computers appeared in the early 1940s, the instructions to make them operate were wired into the machine. Practitioners quickly realized that this design was not flexible and came up with the "stored program architecture" or von Neumann architecture. Thus the division between "hardware" and "software" began with abstraction being used to deal with the complexity of computing.

Programming languages started to appear in the 1950s and this was also another major step in abstraction. Major languages such as Fortran, ALGOL, and COBOL were released in the late 1950s to deal with scientific, algorithmic, and business problems respectively. Edsger W. Dijkstra wrote his seminal paper, "Go To Statement Considered Harmful",[11] in 1968 and David Parnas introduced the key concept of modularity and information hiding in 1972 to help programmers deal with the ever increasing complexity of software systems.

The term "software engineering", coined first by Anthony Oettinger and then used by Margaret Hamilton,[13][14] was used in 1968 as a title for the world's first conference on software engineering, sponsored and facilitated by NATO. The conference was attended by international experts on software who agreed on defining best practices for software grounded in the application of engineering. The result of the conference is a report that defines how software should be developed. The original report is publicly available.

The discipline of software engineering was created to address poor quality of software, get projects exceeding time and budget under control, and ensure that software is built systematically, rigorously, measurably, on time, on budget, and within specification. Engineering already addresses all these issues, hence the same principles used in engineering can be applied to software. The widespread lack of best practices for software at the time was perceived as a "software crisis".

Barry W. Boehm documented several key advances to the field in his 1981 book, Software Engineering Economics. These include his Constructive Cost Model (COCOMO), which relates software development effort for a program T, in man-years, to source lines of code (SLOC): T = k × (SLOC)^(1+x). The book analyzes sixty-three software projects and concludes that the cost of fixing errors escalates as the project moves toward field use. The book also asserts that the key driver of software cost is the capability of the software development team.
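
A sketch of the basic COCOMO relationship in Python, using the published "organic mode" constants (effort here comes out in person-months rather than person-years; the project size is a made-up example):

    # Basic COCOMO: effort = a * (KLOC ** b) person-months (organic mode: a=2.4, b=1.05).
    def cocomo_effort_person_months(sloc, a=2.4, b=1.05):
        kloc = sloc / 1000.0
        return a * kloc ** b

    effort = cocomo_effort_person_months(32_000)   # hypothetical 32,000-line project
    print(f"{effort:.1f} person-months (~{effort / 12:.1f} person-years)")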

In 1984, the Software Engineering Institute (SEI) was established as a federally funded research and development center headquartered on the campus of Carnegie Mellon University in Pittsburgh, Pennsylvania, United States. Watts Humphrey founded the SEI Software Process Program, aimed at understanding and managing the software engineering process. His 1989 book, Managing the Software Process, asserts that the software development process can and should be controlled, measured, and improved. The process maturity levels it introduced would become the Capability Maturity Model Integration for Development (CMMI-DEV), which has defined how the US Government evaluates the abilities of a software development team.

Modern, generally accepted best-practices for software engineering have been collected by the ISO/IEC JTC 1/SC 7 subcommittee and published as the Software Engineering Body of Knowledge (SWEBOK).

Computer - a device that carries out arithmetic or logical operations automatically

A computer is a general purpose device that can be programmed to carry out a set of arithmetic or logical operations automatically. Since a sequence of operations can be readily changed, the computer can solve more than one kind of problem.

Conventionally, a computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logic operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices allow information to be retrieved from an external source, and the result of operations saved and retrieved.

Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic digital computers were developed. Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).

Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space. Computers are small enough to fit into mobile devices, and mobile computers can be powered by small batteries. Personal computers in their various forms are icons of the Information Age and are generally considered as "computers". However, the embedded computers found in many devices from MP3 players to fighter aircraft and from electronic toys to industrial robots are the most numerous.
Computer programming (often shortened to programming) is a process that leads from an original formulation of a computing problem to executable computer programs. Programming involves activities such as analysis, developing understanding, generating algorithms, verification of requirements of algorithms including their correctness and resource consumption, and implementation (commonly referred to as coding) of algorithms in a target programming language. Source code is written in one or more programming languages. The purpose of programming is to find a sequence of instructions that will automate performing a specific task or solving a given problem. The process of programming thus often requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms and formal logic.

Related tasks include testing, debugging, and maintaining the source code, implementation of the build system, and management of derived artifacts such as machine code of computer programs. These might be considered part of the programming process, but often the term software development is used for this larger process with the term programming, implementation, or coding reserved for the actual writing of source code. Software engineering combines engineering techniques with software development practices.

Thursday 24 March 2016

The Local Area Augmentation System

The Local Area Augmentation System (LAAS) is an all-weather aircraft landing system based on real-time differential correction of the GPS signal. Local reference receivers located around the airport send data to a central location at the airport. This data is used to formulate a correction message, which is then transmitted to users via a VHF Data Link. A receiver on an aircraft uses this information to correct GPS signals, which then provides a standard ILS-style display to use while flying a precision approach. The International Civil Aviation Organization (ICAO) calls this type of system a Ground Based Augmentation System (GBAS).
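
The correction idea can be sketched in a few lines of Python (a conceptual illustration only; the real LAAS/GBAS message format, integrity monitoring and broadcast protocol are far more involved): the reference station, whose position is surveyed precisely, compares each measured pseudorange with the geometric range it should have seen and broadcasts the difference for aircraft to subtract.

    import math

    def expected_range(receiver_xyz, satellite_xyz):
        # Geometric distance between the surveyed reference antenna and the satellite.
        return math.dist(receiver_xyz, satellite_xyz)

    def compute_corrections(reference_xyz, observations):
        """observations: {sat_id: (satellite_xyz, measured_pseudorange_m)}."""
        return {sat: measured - expected_range(reference_xyz, sat_xyz)
                for sat, (sat_xyz, measured) in observations.items()}

    def apply_corrections(measured_pseudoranges, corrections):
        """Aircraft side: subtract the broadcast correction from each measurement."""
        return {sat: rho - corrections.get(sat, 0.0)
                for sat, rho in measured_pseudoranges.items()}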

The Indian Regional Navigation Satellite System

The Indian Regional Navigation Satellite System or IRNSS is an indigenously developed navigation satellite system that provides accurate real-time positioning and timing services over India and a region extending to 1,500 km around India. The fully deployed IRNSS system consists of 3 satellites in geostationary (GEO) orbit and 4 satellites in geosynchronous (GSO) orbit, at approximately 36,000 km altitude above the Earth's surface. However, the full system comprises nine satellites, including two on the ground as stand-bys. The need for such a navigation system arises because access to foreign government-controlled global navigation satellite systems is not guaranteed in hostile situations, as happened when the Indian military depended on American GPS during the Kargil War. The IRNSS would provide two services, with the Standard Positioning Service open for civilian use, and the Restricted Service (an encrypted one) for authorized users (including the military).

IRNSS will have seven satellites, six of which have already been placed in orbit. The constellation of seven satellites is expected to be operational from June 2016 onwards.

Satellite navigation

A satellite navigation or satnav system is a system of satellites that provide autonomous geo-spatial positioning with global coverage. It allows small electronic receivers to determine their location (longitude, latitude, and altitude/elevation) to high precision (within a few metres) using time signals transmitted along a line of sight by radio from satellites. The signals also allow the electronic receivers to calculate the current local time to high precision, which allows time synchronisation. A satellite navigation system with global coverage may be termed a global navigation satellite system (GNSS).
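
A minimal sketch of how a receiver turns those time signals into a position, assuming idealised pseudoranges (each one is the satellite-receiver distance plus a common receiver clock bias) and ignoring the atmospheric and relativistic effects a real receiver must handle:

    import numpy as np

    C = 299_792_458.0   # speed of light, m/s

    def solve_position(sat_positions, pseudoranges, iterations=10):
        """Gauss-Newton solve for receiver position (m) and clock bias (s).
        sat_positions: (N, 3) array in metres; pseudoranges: (N,) array; N >= 4."""
        x = np.zeros(4)                                   # [x, y, z, clock bias]
        for _ in range(iterations):
            ranges = np.linalg.norm(sat_positions - x[:3], axis=1)
            residual = pseudoranges - (ranges + C * x[3])
            # Jacobian: negated unit line-of-sight vectors plus the clock-bias column.
            H = np.hstack([-(sat_positions - x[:3]) / ranges[:, None],
                           np.full((len(ranges), 1), C)])
            x += np.linalg.lstsq(H, residual, rcond=None)[0]
        return x[:3], x[3]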

As of April 2013, only the United States' NAVSTAR Global Positioning System (GPS) and the Russian GLONASS are global operational GNSSs. China is in the process of expanding its regional BeiDou Navigation Satellite System into the global Compass navigation system by 2020. The European Union's Galileo is a GNSS in its initial deployment phase, scheduled to be fully operational by 2020 at the earliest. India has a regional satellite-based augmentation system, GPS Aided GEO Augmented Navigation (GAGAN), which enhances the accuracy of NAVSTAR GPS and GLONASS positions, and is developing the Indian Regional Navigation Satellite System (IRNSS). France and Japan are in the process of developing regional navigation systems.

Global coverage for each system is generally achieved by a satellite constellation of 20–30 medium Earth orbit (MEO) satellites spread between several orbital planes. The actual systems vary, but use orbital inclinations of >50° and orbital periods of roughly twelve hours (at an altitude of about 20,000 kilometres or 12,000 miles).
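
The "roughly twelve hours at about 20,000 km" figure follows from Kepler's third law for a circular orbit; here is a quick check in Python, using standard values for Earth's gravitational parameter and mean radius:

    import math

    MU_EARTH = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6_371_000.0        # mean Earth radius, m

    def orbital_period_hours(altitude_km):
        a = R_EARTH + altitude_km * 1000.0           # semi-major axis of a circular orbit
        return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 3600.0

    print(orbital_period_hours(20_200))              # -> ~11.97 hours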

Artificial satellite

In the context of spaceflight, a satellite is an artificial object which has been intentionally placed into orbit. Such objects are sometimes called artificial satellites to distinguish them from natural satellites such as Earth's Moon.

The world's first artificial satellite, Sputnik 1, was launched by the Soviet Union in 1957. Since then, thousands of satellites have been launched into orbit around the Earth. Some satellites, notably space stations, have been launched in parts and assembled in orbit. Artificial satellites originate from more than 40 countries and have used the satellite-launching capabilities of ten nations. About a thousand satellites are currently operational, whereas thousands of unused satellites and satellite fragments orbit the Earth as space debris. A few space probes have been placed into orbit around other bodies and become artificial satellites of the Moon, Mercury, Venus, Mars, Jupiter, Saturn, Vesta, Eros, Ceres, and the Sun.

Satellites are used for a large number of purposes. Common types include military and civilian Earth observation satellites, communications satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites. Satellite orbits vary greatly, depending on the purpose of the satellite, and are classified in a number of ways. Well-known (overlapping) classes include low Earth orbit, polar orbit, and geostationary orbit.

About 6,600 satellites have been launched. The latest estimates are that 3,600 remain in orbit. Of those, about 1,000 are operational; the rest have lived out their useful lives and are part of the space debris. Approximately 500 operational satellites are in low Earth orbit, 50 are in medium Earth orbit (at about 20,000 km), and the rest are in geostationary orbit (at about 36,000 km).

Satellites are propelled by rockets to their orbits. Usually the launch vehicle itself is a rocket lifting off from a launch pad on land. In a minority of cases satellites are launched at sea (from a submarine or a mobile maritime platform) or aboard a plane (see air launch to orbit).

Satellites are usually semi-independent computer-controlled systems. Satellite subsystems handle many tasks, such as power generation, thermal control, telemetry, attitude control and orbit control.

The International Space Station (ISS)

The International Space Station (ISS) is a space station, or a habitable artificial satellite, in low Earth orbit. Its first component was launched into orbit in 1998, and the ISS is now the largest artificial body in orbit; it can often be seen with the naked eye from Earth. The ISS consists of pressurised modules, external trusses, solar arrays, and other components. ISS components have been launched by Russian Proton and Soyuz rockets as well as American Space Shuttles.

The ISS serves as a microgravity and space environment research laboratory in which crew members conduct experiments in biology, human biology, physics, astronomy, meteorology, and other fields. The station is suited for the testing of spacecraft systems and equipment required for missions to the Moon and Mars. The ISS maintains an orbit with an altitude of between 330 and 435 km (205 and 270 mi) by means of reboost manoeuvres using the engines of the Zvezda module or visiting spacecraft. It completes 15.54 orbits per day.
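
The same relation used above gives a quick consistency check on that orbit: at an assumed mean altitude of about 400 km, the period works out to roughly 92 minutes, or about 15.5 orbits per day, in line with the 15.54 figure quoted here.

# Rough consistency check for the ISS orbit (assumed mean altitude ~400 km).
import math

MU_EARTH = 3.986004418e14   # m^3/s^2
R_EARTH = 6_371e3           # m

a = R_EARTH + 400e3                          # semi-major axis, m
period = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
print(period / 60)          # ~92 minutes per orbit
print(86_400 / period)      # ~15.6 orbits per day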

The ISS is the ninth space station to be inhabited by crews, following the Soviet and later Russian Salyut, Almaz, and Mir stations as well as Skylab from the US. The station has been continuously occupied for 15 years and 142 days since the arrival of Expedition 1 on 2 November 2000. This is the longest continuous human presence in space, having surpassed the previous record of 9 years and 357 days held by Mir. The station is serviced by a variety of visiting spacecraft: Soyuz, Progress, the Automated Transfer Vehicle, the H-II Transfer Vehicle, Dragon, and Cygnus. It has been visited by astronauts, cosmonauts and space tourists from 17 different nations.

After the US Space Shuttle programme ended in 2011, Soyuz rockets became the only means of transporting astronauts to the International Space Station, and Dragon became the only provider of bulk cargo-return-to-Earth services (the downmass capability of Soyuz capsules is very limited).

The ISS programme is a joint project among five participating space agencies: NASA, Roscosmos, JAXA, ESA, and CSA. The ownership and use of the space station is established by intergovernmental treaties and agreements. The station is divided into two sections, the Russian Orbital Segment (ROS) and the United States Orbital Segment (USOS), which is shared by many nations. As of January 2014, the American portion of the ISS was funded until 2024. Roscosmos has endorsed the continued operation of the ISS through 2024, but has proposed using elements of the Russian Orbital Segment to construct a new Russian space station called OPSEK.

On 28 March 2015, Russian sources announced that Roscosmos and NASA had agreed to collaborate on the development of a replacement for the current ISS. NASA later issued a guarded statement expressing thanks for Russia's interest in future cooperation in space exploration, but fell short of confirming the Russian announcement.

The Space Shuttle

The Space Shuttle was a partially reusable low Earth orbital spacecraft system operated by the U.S. National Aeronautics and Space Administration (NASA), as part of the Space Shuttle program. Its official program name was Space Transportation System (STS), taken from a 1969 plan for a system of reusable spacecraft of which it was the only item funded for development. The first of four orbital test flights occurred in 1981, leading to operational flights beginning in 1982. The Shuttle fleet flew a total of 135 missions from 1981 to 2011, launched from the Kennedy Space Center (KSC) in Florida. Operational missions launched numerous satellites, interplanetary probes, and the Hubble Space Telescope (HST); conducted science experiments in orbit; and participated in construction and servicing of the International Space Station. The Shuttle fleet's total mission time was 1,322 days, 19 hours, 21 minutes and 23 seconds.

Shuttle components included the Orbiter Vehicle (OV), a pair of recoverable solid rocket boosters (SRBs), and the expendable external tank (ET) containing liquid hydrogen and liquid oxygen. The Shuttle was launched vertically, like a conventional rocket, with the two SRBs operating in parallel with the OV's three main engines, which were fueled from the ET. The SRBs were jettisoned before the vehicle reached orbit, and the ET was jettisoned just before orbit insertion, which used the orbiter's two Orbital Maneuvering System (OMS) engines. At the conclusion of the mission, the orbiter fired its OMS to de-orbit and re-enter the atmosphere. The orbiter then glided as a spaceplane to a runway landing, usually at the Shuttle Landing Facility of KSC or Rogers Dry Lake in Edwards Air Force Base, California. After landing at Edwards, the orbiter was flown back to the KSC on the Shuttle Carrier Aircraft, a specially modified Boeing 747.

The first orbiter, Enterprise, was built for Approach and Landing Tests and had no orbital capability. Four fully operational orbiters were initially built: Columbia, Challenger, Discovery, and Atlantis. Of these, two were lost in mission accidents: Challenger in 1986 and Columbia in 2003, with a total of fourteen astronauts killed. A fifth operational orbiter, Endeavour, was built in 1991 to replace Challenger. The Space Shuttle was retired from service upon the conclusion of Atlantis's final flight on July 21, 2011.

Abietic acid

Abietic acid (also known as abietinic acid or sylvic acid) is an organic compound that occurs widely in trees. It is the primary component of resin acid and the primary irritant in pine wood and resin; it is isolated from rosin (via isomerization) and is the most abundant of several closely related organic acids that constitute most of rosin, the solid portion of the oleoresin of coniferous trees. Its esters and salts are called abietates.

The history of chemistry

The history of chemistry represents a time span from ancient history to the present. By 1000 BC, civilizations used technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze.

The protoscience of chemistry, alchemy, was unsuccessful in explaining the nature of matter and its transformations. However, by performing experiments and recording the results, alchemists set the stage for modern chemistry. The distinction began to emerge when a clear differentiation was made between chemistry and alchemy by Robert Boyle in his work The Sceptical Chymist (1661). While both alchemy and chemistry are concerned with matter and its transformations, chemists are seen as applying scientific method to their work.

Chemistry is considered to have become an established science with the work of Antoine Lavoisier, who developed a law of conservation of mass that demanded careful measurement and quantitative observations of chemical phenomena. The history of chemistry is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs.

The timeline of chemistry

The timeline of chemistry lists important works, discoveries, ideas, inventions, and experiments that significantly changed humanity's understanding of the modern science known as chemistry, defined as the scientific study of the composition of matter and of its interactions. The history of chemistry in its modern form arguably began with the Irish scientist Robert Boyle, though its roots can be traced back to the earliest recorded history.

Early ideas that later became incorporated into the modern science of chemistry come from two main sources. Natural philosophers (such as Aristotle and Democritus) used deductive reasoning in an attempt to explain the behavior of the world around them. Alchemists (such as Geber and Rhazes) used experimental techniques in an attempt to prolong life or perform material conversions, such as turning base metals into gold.

In the 17th century, a synthesis of the ideas of these two disciplines, that is the deductive and the experimental, led to the development of a process of thinking known as the scientific method. With the introduction of the scientific method, the modern science of chemistry was born.

Known as "the central science", the study of chemistry is strongly influenced by, and exerts a strong influence on, many other scientific and technological fields. Many events considered central to our modern understanding of chemistry are also considered key discoveries in such fields as physics, biology, astronomy, geology, and materials science to name a few.

A chemist

A chemist is a scientist trained in the study of chemistry. Chemists study the composition of matter and its properties. Chemists carefully describe the properties they study in terms of quantities, with detail on the level of molecules and their component atoms. Chemists carefully measure substance proportions, reaction rates, and other chemical properties. In Commonwealth English, the word 'chemist' is also used to refer to pharmacists.

Chemists use this knowledge to learn the composition and properties of unfamiliar substances, as well as to reproduce and synthesize large quantities of useful naturally occurring substances and to create new artificial substances and useful processes. Chemists may specialize in any number of subdisciplines of chemistry. Materials scientists and metallurgists share much of the same education and skills with chemists. The work of chemists is often related to that of chemical engineers, who are primarily concerned with the proper design, construction and evaluation of the most cost-effective large-scale chemical plants, and who work closely with industrial chemists on the development of new processes and methods for the commercial-scale manufacture of chemicals and related products.

Semiconductors

Semiconductors are crystalline or amorphous solids with distinct electrical characteristics. Their resistance is high - higher than that of typical resistive materials, but still much lower than that of insulators - and it decreases as their temperature increases, behavior opposite to that of a metal. Finally, their conducting properties may be altered in useful ways by the deliberate introduction ("doping") of impurities into the crystal structure, which lowers its resistance but also permits the creation of semiconductor junctions between differently-doped regions of the crystal. The behavior of charge carriers at these junctions is the basis of diodes, transistors and all modern electronics.

Semiconductor devices can display a range of useful properties such as passing current more easily in one direction than the other, showing variable resistance, and sensitivity to light or heat. Because the electrical properties of a semiconductor material can be modified by controlled addition of impurities, or by the application of electrical fields or light, devices made from semiconductors can be used for amplification, switching, and energy conversion.

The modern understanding of the properties of a semiconductor relies on quantum physics to explain the movement of electrons and holes (collectively known as "charge carriers") in a crystal lattice. Doping greatly increases the number of charge carriers within the crystal. When a doped semiconductor contains mostly free holes it is called "p-type", and when it contains mostly free electrons it is known as "n-type". The semiconductor materials used in electronic devices are doped under precise conditions to control the concentration and regions of p- and n-type dopants. A single semiconductor crystal can have many p- and n-type regions; the p–n junctions between these regions are responsible for the useful electronic behavior.
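
As an illustration of that junction behavior, the ideal-diode (Shockley) equation, I = I_s * (exp(V / (n*V_T)) - 1), describes how a p-n junction passes current readily under forward bias but only a tiny leakage current under reverse bias. The numbers in the sketch below (saturation current, ideality factor) are assumed, typical-order values rather than measurements of any particular device.

# Ideal-diode (Shockley) equation: current through a p-n junction vs. bias.
import math

I_S = 1e-12     # reverse saturation current, amperes (assumed)
N = 1.0         # ideality factor (ideal junction)
V_T = 0.02585   # thermal voltage kT/q at ~300 K, volts

def diode_current(v):
    """Current through an ideal p-n junction at bias voltage v (volts)."""
    return I_S * (math.exp(v / (N * V_T)) - 1)

for v in (-0.5, -0.1, 0.0, 0.3, 0.6, 0.7):
    print(f"V = {v:+.1f} V  ->  I = {diode_current(v):+.3e} A")
# Forward bias of a few tenths of a volt gives currents in the milliamp range
# and above; reverse bias leaks only about a picoampere.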

Although some pure elements and many compounds display semiconductor properties, silicon, germanium, and compounds of gallium are the most widely used in electronic devices. Elements near the so-called "metalloid staircase" of the periodic table, where the metalloids are located, are usually used as semiconductors.

Some of the properties of semiconductor materials were observed throughout the mid-19th century and the first decades of the 20th century. The first practical application of semiconductors in electronics was the 1904 development of the cat's-whisker detector, a primitive semiconductor diode widely used in early radio receivers. Developments in quantum physics in turn allowed the development of the transistor in 1947 and the integrated circuit in 1958.

Nanotechnology

Nanotechnology ("nanotech") is manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest, widespread description of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter that occur below the given size threshold. It is therefore common to see the plural form "nanotechnologies" as well as "nanoscale technologies" to refer to the broad range of research and applications whose common trait is size. Because of the variety of potential applications (including industrial and military), governments have invested billions of dollars in nanotechnology research. Until 2012, through its National Nanotechnology Initiative, the USA has invested 3.7 billion dollars, the European Union has invested 1.2 billion and Japan 750 million dollars.

Nanotechnology as defined by size is naturally very broad, including fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication, etc. The associated research and applications are equally diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale.

Scientists currently debate the future implications of nanotechnology. Nanotechnology may be able to create many new materials and devices with a vast range of applications, such as in nanomedicine, nanoelectronics, biomaterials, energy production, and consumer products. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials, and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.

Lamprophyres

Lamprophyres are uncommon, small-volume ultrapotassic igneous rocks primarily occurring as dikes, lopoliths, laccoliths, stocks and small intrusions. They are alkaline, silica-undersaturated mafic or ultramafic rocks with high magnesium oxide, >3% potassium oxide, high sodium oxide, and high nickel and chromium contents.

Lamprophyres occur throughout all geologic eras. Archaean examples are commonly associated with lode gold deposits. Cenozoic examples include magnesian rocks in Mexico and South America, and young ultramafic lamprophyres from Gympie in Australia with 18.5% MgO at ~250 Ma.