Saturday, 12 August 2017

Hybrid Solar and Wind Energy Harvesting

This is an old project I started about a year ago that has since been discontinued. However, I am more than happy to share the information with the public. The circuit schematics are all available for download from the OSHPark site:

https://oshpark.com/profiles/MuonRay

Energy harvesting is a concept that essentially promises vast arrays of decentralized energy harvesting stations that can run anywhere, operate independently, and power devices such as Internet of Things (IoT) sensors, which send the data they collect to a centralized location on the internet for analysis.

The stations themselves fall largely into two regimes: mobile and immobile.

Mobile sensor stations would include, as examples:
  • Wearable sensors (e.g. clothing, watches, belts, rings, bracelets, etc.)
  • Sensors for vehicles (e.g. tags or attachable modules for cars, motorcycles, bicycles, planes, drones, seafaring vessels, etc.)
Immobile sensor stations would include, as examples:
  • Sensors for data-reading stations (e.g. outdoor weather stations, climate control for buildings, etc.)
  • Sensors for appliances (washing machines, refrigerators, heating systems, etc.)


In both regimes it is important to consider how to increase the autonomy of the sensor stations themselves. The field of ambient energy harvesting aims to capture available energy from the external environment and use it to power the sensor stations. For this to be effective, the harvesting must be multifaceted, compensating for the differences between times of day and times of year, so that the station can work almost indefinitely.

Multifaceted energy harvesting works by identifying the most efficient forms of sustainable energy in the environment that take up the least space and are the least intermittent (i.e. subject to the fewest interruptions in the overall energy supply).


Discussion of Solar and Wind as Energy Sources

Solar:


By far, solar and wind are the most abundant energy sources in a typical environment and require the least space. Moreover, although both can be intermittent, a hybridized system can reasonably compensate for the relative losses in sun and wind as conditions change through the day and the seasons.

Powering an IoT module from a solar-charged battery source is simple in terms of installation; the main obstacle is ensuring the solar panel and energy harvesting circuitry harvest energy even at low light levels. This can be accomplished with DC-DC boost (step-up) converters and/or high-quality thin-film solar panels.
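To make the low-light requirement concrete, here is a back-of-the-envelope power-budget sketch in Python. Every number in it is an illustrative assumption for the sake of the arithmetic, not a measurement from this project:

```python
# Hypothetical daily energy budget for a solar-powered IoT node.
# All figures below are illustrative assumptions, not project measurements.

panel_power_w  = 0.10   # assumed panel output in overcast light, watts
harvest_eff    = 0.70   # assumed converter efficiency (boost-IC class)
daylight_hours = 6.0    # assumed hours of usable light per day

node_avg_power_w = 0.015  # assumed average draw of a duty-cycled sensor node

harvested_wh = panel_power_w * harvest_eff * daylight_hours  # energy in per day
consumed_wh  = node_avg_power_w * 24                         # energy out per day

surplus = harvested_wh - consumed_wh
print(f"{harvested_wh:.2f} Wh harvested vs {consumed_wh:.2f} Wh consumed")
print(f"daily surplus: {surplus:.2f} Wh")
```

If the surplus is negative under realistic light assumptions, either the panel must be larger or a second source (such as wind) has to make up the difference, which is the motivation for hybridizing.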

The latest design for my energy harvesters uses Linear Technology's LTC3105, a boost converter designed for energy sources such as solar panels operating at low light levels. A PCB was produced using OSHPark's online ordering service and assembled in my electronics lab.




The energy harvester is tested using a thin-film, flexible silicon solar cell, made by PowerFilm Inc., held in a picture frame.



When relying on solar energy as the power source, for northern and southern latitudes the seasonal dependence on sunlight is an obvious limiting factor on energy supply. One way to mitigate this is to place the solar panels as high as possible above the ground surface so that obstacles do not block the light.

Solar power generation also benefits from efficient tracking systems, along with simply surveying the relative light levels in the area where a solar-powered device will be installed.



Wind:


In order to harvest wind energy we require mechanical knowledge as well as electrical. There are many different wind turbine designs; however, to be efficient, a wind turbine must work with the direction the wind is blowing.

To do this it must either be designed to steer itself, for example using a fin on the back of the nacelle, or be a vertical-axis wind turbine (VAWT), which works regardless of the direction the wind blows. The VAWT design is better suited to the small and medium turbines used in energy harvesting.



Here is a video showing how the VAWT was tested using a different energy harvesting circuit that boosts an input DC voltage of 0.8V-5.0V to approximately 5.0V.
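As background on what such a boost stage does (a hedged sketch; the actual circuit in the video is not reproduced here), the ideal continuous-conduction relation V_out = V_in / (1 - D) tells us the duty cycle D the converter must run at for a given input voltage:

```python
# Ideal boost-converter duty cycle needed to step a harvested input
# voltage up to a 5.0 V rail. Real harvester ICs regulate this
# automatically; this is only the textbook relation V_out = V_in / (1 - D).

def boost_duty_cycle(v_in, v_out=5.0):
    """Return the ideal duty cycle D for a boost converter."""
    if not 0 < v_in <= v_out:
        raise ValueError("input must be positive and not above the output")
    return 1.0 - v_in / v_out

for v in (0.8, 2.5, 5.0):
    print(f"V_in = {v:.1f} V -> D = {boost_duty_cycle(v):.2f}")
```

At the 0.8 V end of the stated input range the converter has to run at a duty cycle of about 0.84, which is why low-voltage boost stages are the hard part of the design.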



By placing the VAWT inside a commercial aluminium flag pole mount, we can in principle plant the wind turbine on almost any surface and point it in the direction of the prevailing wind to harvest power.



Wind energy collection is more subtle than solar in terms of its dependencies. The amount of wind available to harvest at a given time is itself a consequence of the available solar energy, in ways that are less direct than photovoltaic solar harvesting.

Wind speeds at or near the surface generally decrease after sunset because at night the surface of the Earth cools much more rapidly than does the air above the surface.

As a result of this difference in cooling ability, it doesn’t take long for the ground to become colder than the air above it.

The air in close contact with the ground — say in the lowest 300 feet of the atmosphere — then becomes colder than the air above it.

This circumstance leads to the development of what is known as a temperature inversion. Inversions dramatically reduce the amount of mixing that occurs between different vertical layers of the atmosphere. As a consequence, once the inversion sets up (after sunset), it is much harder for fast-moving air above the ground to mix down to the surface, where it could appear as a gust of wind.



This is why fogs appear typically after sunset or before sunrise. The inversion prevents mixing that would disrupt the fog.


On a clear day, the sun heats the ground much more quickly than the air above it. Air near the ground then becomes warmer than the air above, say, the lowest 300 feet of the atmosphere, and currents form easily: hot air rises and displaces cold air downward, mixing the layers and producing a cycle of surface gusts. These can then be harnessed by a surface windmill.

However, at sunset the ground will begin to cool rapidly and a temperature inversion will occur as the air at the surface cools faster than the more insulated air in the upper atmosphere. The winds will then be low or non-existent at night after a clear sunny day, in summer for example.

The ground always cools faster than the air, but if the temperature inversion is somehow negated, the winds will blow day or night.

In some cases cloud and temperature structures exist, in storms for example, that can often overrule the tendency for inversions to set up at night.

Low-pressure systems can prevent temperature inversions from setting up at night, as warm, moisture-laden air rises from the surface. This frequently happens around large bodies of water: water retains the sun's heat long after sunset and can thus halt the temperature inversion at night.

Late autumn and winter can also bring cold clouds and cold air in the upper atmosphere that counter the night-time temperature inversion, creating a situation where the upper atmosphere is even colder than the air near the surface, even with the low levels of surface heating from the weaker sun of these seasons. Hence windy conditions are more common in autumn and winter.

Along coastlines, air near the surface of the water can remain warmer for longer than it would inland. Hence air will move from the relatively warmer coastline towards the interior. This becomes particularly apparent in autumn and winter.

So, in general, if we can expect high levels of clear sunshine then we can expect low levels of wind; and if we can expect high levels of wind then sunlight will most likely be very diffuse, with a lot of moving cloud cover.

Hence having a hybrid solar and wind system can cover a great deal of weather conditions for powering an IOT module as continuously as possible.





Hybrid Solar and Wind Energy Harvesting Station Prototype


The hybrid solar-wind powered unit showcased here uses an integrated solar panel, a vertical-axis wind turbine (VAWT), energy harvesting circuitry and a 3.7V lithium-ion polymer battery, encased within a hollow but strong and durable clear perspex tube.



The tube is clear for solar energy to be gathered inside the tube during the day and to function as a "light pipe" of sorts for an energy efficient LED for illumination purposes as a demonstration.



The system is designed to be integrated together by using a small thin film silicon solar panel that has been folded inside the cylindrical perspex tube so that it can gather light from any possible angle without the need for tracking.

Although this technique would be inefficient for conventional solar panels, the thin-film flexible solar panel can generate voltage from diffuse light hitting it from any direction, so that it can generate power from dawn to dusk by solar energy while the VAWT generator can provide a power source whenever the wind is blowing.

Altogether, this design saves on installation space for an energy harvesting power source for use in lighting, USB device charging, small IoT sensor stations, signal boosters/repeaters and so forth.

We can even place the pipe in conventional fittings, such as commercially available aluminium flagpole mounts, which allow us to point the pipe at whatever angle we judge best for energy gathering.

The union between energy harvesting and IoT devices still has several hurdles to clear. Hopefully, by examining and incorporating more ways to harvest energy from the surrounding environment, these hurdles can be worked on by engineers who want to build, in essence, networks of self-powering, efficient and highly versatile electronics.




Monday, 10 July 2017

Who Truly Benefits from Science?

Science is often portrayed in media as a self-evidently benevolent enterprise. Moreover, it is constantly portrayed in mainstream media, both in news and popular documentaries, as a continuously and eternally progressing enterprise where each new development somehow brings us to a world of wonder and whimsy with external consequences which can be either ignored or simply adapted to.

Curiously, it is also assumed that the rate of progress in science is completely linear and that in the next 100 years we are told we shall see rates of progress greater than or equal to the progress seen last century. So it is said in virtually all media, both in fiction and fact contexts.

It is strange, therefore, to compare this idea to the cycles of growth, maturity, decay and decline seen in the historical record, where complex, albeit not scientific, societies emerge, grow and decay as a matter of record. All complex societies, as they grow, inevitably require more resources and more specialization of participants' roles (leading to less freedom), and ultimately more methods of taxation to maintain the standard the civilization reached at its optimum, once resources have been exhausted beyond a certain critical point.

The scientific method is the basic definition of what science is as a means of acquiring knowledge. However, as scientific culture inevitably grew more complex, specialization emerged, and in vernacular use the term "science" now means many different things to different people, rightly or wrongly. To some, science is experimentation; to others, theory. Moreover, in ordinary comprehension, science can be presented as popular science or often as simple technical wizardry. Computer science, for example, is in reality engineering, but to very many people computer technology is a science; the same can be said of social science.

More troubling is that in this age the products of science, namely technical gadgets, are often so much lauded and are in so much abundance that they begin to eclipse the methods of science that produced them. Often technology produced by industry is used as a kind of logo equated with the "benevolent" visions of science. To the appearance of many therefore science and industry have merely become one and the same. Therefore, we might ask ourselves, who benefits from what is seen in the modern sense as "Science".

In much of my own experience over the last few years working in research and industry, science, and physics in particular, appears to be functioning mainly, at best, as a service for industry. Very little science is done that does not have direct applications to industry and the marketplace. In effect, science has all but become a research and development wing for corporations, with very few exceptions.

The corporations involved with making profit from science have it very much their own way, with very little risk, which is one of the main advantages of corporate structure to begin with. As mentioned before, the banner of "science" being self-evidently benign is a strong dogma in the minds of the public and politicians with very few exceptions. Therefore many corporations can conduct their "scientific" R&D operations under a kind of saintly halo unless costly and time-consuming investigations are launched, which are rare.

Moreover, the image of science being a benign enterprise leads governments to directly fund scientific research out of taxpayer funds. Hence corporations can easily use taxpayer funding as a continuous resource to garner future profits for themselves based on new discoveries and new techniques painstakingly generated in the lab often by highly intelligent, but often institutionalized, hardworking scientists.

Furthermore, the apparently self-evident benign image and prestige of scientists in the media also leads to a demand on universities to educate more students in STEM (science, technology, engineering, maths), without questioning why it is better for a young person to spend their time studying one subject over another beyond the claim that "subject A is more useful than subject B". One obvious question is: more useful to whom, and for what? It is of obvious benefit to high-tech multinational corporations; the STEM educational system trains and educates their future workers and ensures future profit keeps flowing.

Opportunity for employment and empowerment through STEM, undoubtedly real, is often cited as the main reason why STEM is so popular a topic in the context of education. Interestingly, though, there is always an increasing demand for STEM graduates alongside a bemoaning of large dropout rates, particularly among engineering majors. If the corporate demand for STEM-educated workers exceeds what students want to study and are capable of studying in college, it raises the question of which party's best interests the educators are serving.

Much of the modern scientific enterprise therefore rests on a troubling amount of circular reasoning. Science is funded by working taxpayers; those workers are encouraged to study STEM by taxpayer-funded education; workers in STEM fields earn salaries which are then taxed to fund future workers and developments. The future developments are real and do benefit people, but often only a selective few. After all, modern science and technology still exist within a market system, and the profits they generate are increasingly in the hands of the few rather than the many. Many people in the over-exploited regions of the world today not only fail to benefit from much of this scientific progress but are burdened disproportionately by the consequences of the pollution and weapons systems science has created, in a not-so-neutral fashion.

By and large we may live more prosperously than our ancestors thanks to modern scientific developments, but that prosperity has carried an enormous burden, placing tremendous strain on the limited resources of both fragile humans and the Earth. We might all have to ask ourselves: is science really beneficial to everyone equally?

Wednesday, 9 November 2016

The Possibility of "Warp Drive" in the Context of General Relativity




After 36 years of travel, the Voyager 1 spacecraft has officially been announced by NASA to have entered interstellar space, the vast void of the galaxy that separates the stars. Pioneer 10, travelling in the opposite direction to Voyager 1, is the second most distant human-made object. Pioneer 11 and Voyager 2 come in 3rd and 4th place respectively, as shown in the diagram below.





These spacecraft are humanity's first physical objects to leave the solar system. Although exploration of our solar system is nowhere near complete, it is clear that in less than a century of exploration we have essentially traversed the solar system with our machines.



Due to the theory of Special Relativity there is a cosmic speed limit, the speed of light. 

The speed of light is a very big number by planetary standards, a beam of light in a vacuum travels 299,792,458 meters per second.

Since the Earth's equator is 40,075.02 km long, something moving at light speed would circle the equator about 7.48 times in one second. This is what makes a planet-wide communication system, such as the internet, possible.
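The figure is easy to verify from the two numbers already quoted (a quick sketch, not from the original post):

```python
# How many times light circles the equator in one second,
# using the figures quoted in the text above.
c = 299_792_458        # speed of light in vacuum, m/s
equator = 40_075_020   # Earth's equatorial circumference, m

laps_per_second = c / equator
print(round(laps_per_second, 2))  # -> 7.48
```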



However, on a cosmic scale the speed of light seems less than adequate for traversing space, even using electronic signals. When New Horizons reached Pluto, it was 5,906,376,272 km from Earth, about 39.5 times the distance from the Earth to the sun. It is common knowledge that light takes about 8 minutes to reach the Earth from the sun, so radio signals take roughly 5.5 hours to reach Earth from the probe and vice versa. This means that if we were steering the probe by remote control, a single command would take about 11 hours to complete a round trip. Therefore, for spacecraft to visit the planets and work in real time, they must be at least semi-intelligent and robotic. The further a probe travels from Earth, the more automated and intelligent it must be. Spacecraft destined for the stars and beyond would need intelligence close to, or possibly greater than, a human's to make the trip worthwhile, as they would be so far away that any problems would have to be solved as they happen.
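The delays can be recomputed directly from the distance figure quoted above (a sketch; the one-way delay comes out at just under 5.5 hours):

```python
# Light-travel-time arithmetic for the Earth-Pluto distance quoted above.
c_km_s = 299_792.458               # speed of light, km/s
pluto_distance_km = 5_906_376_272  # distance at the New Horizons encounter

one_way_h = pluto_distance_km / c_km_s / 3600   # one-way signal delay, hours
round_trip_h = 2 * one_way_h                    # command + response delay

print(f"one way:    {one_way_h:.2f} h")
print(f"round trip: {round_trip_h:.2f} h")
```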

At interstellar distances, radio signals emitted from Earth, which travel at the speed of light, move staggeringly slowly by the timescales humans are used to. As a civilization we have been transmitting radio signals of significant strength for only about eight decades, a human lifetime: long enough only to create a small radio bubble around our region of the galaxy.


Even in about another 100 years time, our radio bubble from our radio communications will be minuscule compared to the scale of the galaxy.






An illustration of what a radio bubble (yellow speck) from a civilization transmitting radio signals for 200 years looks like on a galactic scale.


Special Relativity's cosmic speed limit essentially traps both humans and our technology in a time bubble when separated by vast expanses of space. However, Special Relativity is not the only physical law governing space and time. There is another, far more complicated aspect of the theory of relativity, with many mysterious predictions: General Relativity. It is at the core of understanding motion in a non-classical context, where the concepts of action and reaction do not appear in the same way as they would in a purely Newtonian treatment.



General Relativity and The Mach Principle: Einstein's view of Gravity

Newton famously conceived the concept of gravity when he reportedly witnessed an apple falling from a tree at the same time as he saw the moon set, and asked, "If an apple falls, does the moon also fall?". This question led to the development of Newtonian mechanics, the foundation of modern physics. Moreover, Newton developed calculus in order to quantify the curved paths of objects under gravitational interaction. In essence this also showed that we can make great progress from a mathematical description of physical forces without really knowing their full nature. Newton knew how gravity worked from his calculations, but he did not know the nature of the force.

This lack of a true physical concept of gravity meant that mysteries remained in our own solar system long after Newton's laws were used to predict planetary orbits. Most famously, the orbit of Mercury displayed anomalies that could only be accounted for, in Newtonian theory, if another planet, called Vulcan, was tugging at it from the opposite side of the sun. Vulcan was never discovered, and for a very good reason: it did not exist.


The true concept of gravity came from the idea of spacetime curvature, summarized in the Einstein Field Equation which is a result of the theory of General Relativity. 




R_{\mu \nu} - \frac{1}{2} R \, g_{\mu \nu} = \frac{8 \pi G}{c^{4}} T_{\mu \nu}

where R_{\mu \nu} is the Ricci curvature tensor, R the scalar curvature, g_{\mu \nu} the metric tensor, G Newton's gravitational constant and T_{\mu \nu} the stress–energy tensor.

According to the Einstein Equation above, matter and energy tell spacetime how to curve and in turn spacetime tells matter and energy how to move. 



One of the purposes of introducing the concept of a tensor is that it can represent a collection of attributes associated with some point in spacetime. As a generalization, the stress-energy tensor [Tμν] collectively describes the energy density, momentum density, energy flux, pressure and shear stress associated with any unit volume of spacetime. The diagram below is representative of the potential scope of the stress-energy tensor:


Like the Ricci tensor, the stress-energy tensor is a rank-(0,2) tensor and can therefore be expressed as a 4×4 matrix associated with the [t,x,y,z] components of spacetime.



  • In isolation, the 3×3 matrix [T11–T33] is sometimes referred to as the stress tensor, as every element corresponds to the stress, i.e. force per unit area, in the [μ] direction acting on a surface normal to the [ν] direction.
  • With reference to the table above, the suffixes [1,2,3] might be loosely associated with the directions [x,y,z], where [T13] is the stress acting in the [x] direction and [T33] the stress acting in the [z] direction, with respect to a surface in the [xy] plane.
  • The diagonal elements of this 3×3 matrix, i.e. [T11, T22, T33], are stresses that can be interpreted as pressure, i.e. force per unit area. The remaining elements are shear stresses, which for simplicity can be set to zero for the scope of most discussions.
  • In contrast, the element [T00] is the energy density, i.e. energy per unit volume, at a given point in spacetime.
  • The elements [T01, T02, T03] correspond to the energy flux, e.g. [T01] is the energy flux in the [x] direction.
  • Alternatively, the elements [T10, T20, T30] correspond to the momentum density, i.e. momentum passing through a surface of unit area per unit time.
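Collecting the points above, the component layout can be sketched explicitly (a reconstruction for reference, not the post's original table):

```latex
% Component layout of the stress-energy tensor:
%   T_{00}                 : energy density
%   T_{01}, T_{02}, T_{03} : energy flux
%   T_{10}, T_{20}, T_{30} : momentum density
%   T_{11}, T_{22}, T_{33} : pressure (diagonal stresses)
%   remaining T_{ij}       : shear stresses
T_{\mu\nu} =
\begin{pmatrix}
T_{00} & T_{01} & T_{02} & T_{03}\\
T_{10} & T_{11} & T_{12} & T_{13}\\
T_{20} & T_{21} & T_{22} & T_{23}\\
T_{30} & T_{31} & T_{32} & T_{33}
\end{pmatrix}
```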

This can initially appear to be leading towards a very complicated description of spacetime, but in the context of cosmology spacetime is often modelled as a perfect fluid, where the complexity of the previous table reduces to:
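The reduced form originally shown here as an image is missing; as a reconstruction, the standard perfect-fluid stress-energy tensor (in the (−,+,+,+) signature) is:

```latex
% Perfect-fluid stress-energy tensor, (-,+,+,+) signature:
T_{\mu\nu} = \left(\rho + \frac{P}{c^{2}}\right) u_{\mu} u_{\nu} + P\, g_{\mu\nu}
% In the fluid's rest frame this reduces to the diagonal form
T_{\mu\nu} = \mathrm{diag}\!\left(\rho c^{2},\; P,\; P,\; P\right)
```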



In this case, the stress-energy tensor is defined by the matter-energy density [ρ] and pressure [P] of a unit volume of spacetime. Among other things, this can be correlated to a specific solution of the Einstein field equation known as the Friedmann solution, which is more or less the basis for the entire field of cosmology:
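The equation originally pictured here did not survive; the standard Friedmann equation (a reconstruction, in its usual FLRW form with a cosmological constant) is:

```latex
% Friedmann equation for a homogeneous, isotropic universe:
\left(\frac{\dot{a}}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3}
% a(t)   : scale factor of the universe
% rho    : matter-energy density
% k      : spatial curvature constant
% Lambda : cosmological constant
```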



Friedmann equation and its terms. It is useful to point out that whenever we talk about General Relativity we are usually talking about it in the context of cosmology; hence in many ways we do not use Einstein's equations directly but rather their solutions. Newton's laws, in comparison, dominate on interplanetary and interstellar scales, and even then we usually use the solutions which incorporate Newton's laws, such as Kepler's laws.


The fundamental difference between Einsteinian and Newtonian gravity is that mass density is the source of gravity in Newton's law, whereas the energy and momentum density in spacetime are the source of gravitation in Einstein's General Relativity.

However, the Einstein equation makes a further step not only by relating spacetime curvature and the motion of a matter-energy system but by providing the implication that accelerated motion and the effects of gravity are themselves not distinguishable. 

Therefore, in a local medium, accelerated motion should generate an inertial reaction force under this theory.


From Newtonian mechanics, the property of inertia is an inherent property of matter that is independent of all other things in the universe. It is unaffected by the presence or absence of the other matter elsewhere in the universe. This is one interpretation of Newton's Third Law.


General Relativity says that the inertial reaction forces in local objects accelerated relative to the "fixed stars" are indistinguishable from a gravitational "field" created here by the presence of distant matter apparently acting on the accelerating object.

This was understood through the famous elevator thought experiment.

This was one of Einstein's strongest guiding principles in developing his theory of General Relativity in the first place, and was what he referred to as the Mach Principle.


Consider the following



You are standing in a field looking at the stars. Your arms rest freely at your sides, and you see that the distant stars are not moving. Now start spinning. The stars whirl around you and your arms are pulled away from your body. This is the centrifugal force, but why should it act when the stars are whirling and not when you are standing still? If all reference frames are equal, why should the change matter?


The Mach Principle implies that this is not just a coincidence: there is a physical law relating the motion of the distant stars to the local inertial frame. If you see all the stars whirling around you, then some physical law causes the centrifugal force to lift your arms. Moreover, it should also appear to an outside observer that you are under a centrifugal force.


Nowhere is this more obvious than on the planet Jupiter, which appears as an oblate spheroid, with flattened poles and a bulging equator that can be made out even in an amateur telescope. Due to Jupiter's rapid rotation, the effect of the centrifugal force is obvious. Earth is also an oblate spheroid, with the poles slightly flatter than the equator.




The summary of the idea of the Mach Principle, although vague, is essentially this:
"mass out there influences inertia here and does not influence itself". 


In General Relativity, spacetime is "curved", and momentum (and hence force) at a point cannot be directly and meaningfully compared to momentum (and force) at a different point in spacetime. To compare quantities at different points we need "parallel transport", which maps the momentum (or force) space at one point onto that at another point along a given path. The resulting momentum (and force) at the destination depends on the path chosen and on the geometry of the particular spacetime, i.e. whether you are in a stationary or rotating frame of reference. So forces at a distance, as described by Mach's Principle, cannot really obey Newton's Third Law under local symmetry.

Under closer analysis, Newton's third law is really a meta-law, fundamentally equivalent to conservation of momentum. This is a consequence of Noether's Theorem, which states that any continuous symmetry of the equations of motion implies a conservation law.

Physicists and mathematicians define a “symmetry” as a coordinate transformation that can be done to a system that leaves its essential features unchanged.
A circle has a lot of symmetry, as we can rotate it around the middle by any angle, and after the rotation it remains the same circle. We can also reflect it around an axis down the middle. A square, by contrast, has some symmetry, but less — we can reflect it around the middle, or rotate by some number of 90-degree angles, but if we rotated it by an angle that wasn't a multiple of 90 degrees we wouldn't get the same square back. A random scribble doesn't have any symmetry at all; anything we do to it will change its appearance.



More exactly, Noether's theorem says that if you can continuously change some coordinate variable \xi without changing the environment, then there is a conserved quantity

P_{cons} = \frac{\partial (K.E.)}{\partial \dot{\xi}},

where K.E. is the kinetic energy and \dot{\xi} is the time rate of change of \xi. As an example, the kinetic energy can be written \frac{1}{2} m (\dot{x}^2 + \dot{y}^2 + \dot{z}^2), so that if the system is unchanged by translations along the x direction, then the conserved quantity is P_{cons} = m \dot{x} = m v_x, which is the momentum in the x direction.
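The translation example above can be checked numerically (a small illustrative sketch, not from the original post): in a potential that does not depend on x, the x-momentum of a simulated particle never changes, while the y-momentum does.

```python
# Numerical illustration of Noether's theorem: a particle in the
# potential V(x, y) = y**2, which is independent of x. Translation
# symmetry in x implies p_x = m * vx is exactly conserved.

def simulate(steps=10_000, dt=1e-3, m=1.0):
    x, y = 0.0, 1.0
    vx, vy = 0.5, 0.0
    for _ in range(steps):
        # force = -grad V;  V = y**2  =>  Fx = 0, Fy = -2y
        fx, fy = 0.0, -2.0 * y
        vx += fx / m * dt
        vy += fy / m * dt
        x += vx * dt
        y += vy * dt
    return m * vx, m * vy

px, py = simulate()
print(px)  # -> 0.5, exactly the initial x-momentum (conserved)
print(py)  # oscillates; y-momentum is NOT conserved
```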
Noether’s Theorem also allows you to identify less obvious conserved quantities.  For example, imagine that the force-emitting object is a cylinder with a helical coil wrapped around it, like this:
This environment is no longer unchanged by small translations in the x, y or z directions, nor by small rotations around any of the axes. It is, however, unchanged by a particular combination of translation and rotation: if d is the distance between coils of the helix, the environment is unchanged when you simultaneously rotate 360 degrees around the x axis and translate by d in the x direction. Any small translation/rotation done in that same proportion also leaves the environment unchanged. Noether's Theorem therefore guarantees that a particular combination of linear momentum and angular momentum will be conserved forever.

Specifically, you can work out from the equation above that L_x + \frac{d}{2\pi} P_x is conserved, where L_x is the angular momentum around the x axis and P_x is the linear momentum in the x direction.

Energy conservation appears naturally from Noether’s Theorem when you assume that the environment is symmetric with respect to translations in time.  Momentum conservation appears in environments which are symmetric in space.

In all cases, where Newton's third law appears to be violated under a local symmetry, it is conservation of momentum that remains the fundamental law.

In General Relativity the gravitational "field" is in a sense an illusion, created by the way mass-energy density warps spacetime and by the delayed propagation of gravitational disturbances, which produces the effect of the inertial reaction force. We might then be convinced that the issue of "action at a distance" used to describe Ernst Mach's effect is eliminated.

In this sense the fields described in General Relativity are just book-keeping devices for the delayed interaction of sources.

To contrast, in Quantum Field theory, the field itself is considered a physical quantity and carries with it an energy density. In Quantum field theory, excitations in the field are in effect quantised in a form of angular momentum in a rotationally symmetric field space, thereby eliminating the standing notion that there is something particularly different between the nature of the field (electromagnetic, strong, weak) and interactions within it. 

Variables such as spin, charge, quark color and so forth are quanta of the field in this view. 

General Relativity, however, still views the gravitational field as a book-keeping device for the delayed interaction of sources: it does not treat the field itself as carrying an energy density, and it offers no explanation of how field quanta would influence particles.

General relativity was built around the geometrical picture of gravity. Fundamentally, however, General relativity should itself be some form of lower-order theory arising from a theory of gravitation mediated by massless spin-2 particles, i.e. gravitons.

The initial form of general relativity was made as simple as possible, a result of Einstein himself using only 2nd-order Partial Differential Equations (PDEs) and rejecting the higher-order (3rd, 4th, etc.) PDEs.

An Effective Quantum Field Theory for gravity would not only include the higher terms but would also include coefficients on the geodesic length which would be on the order of Planck's Constant.
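Schematically, such an effective action supplements the Einstein-Hilbert term with higher-curvature corrections whose effects are suppressed by powers of the Planck scale (a standard effective-field-theory expansion, written here only as a sketch):

```latex
S \;=\; \int d^4x\,\sqrt{-g}\,\left[\frac{2}{\kappa^2}\,R
      \;+\; c_1 R^2 \;+\; c_2 R_{\mu\nu}R^{\mu\nu} \;+\; \cdots\right],
\qquad \kappa^2 = 32\pi G,
```

where the dimensionless coefficients \(c_1, c_2\) encode unknown Planck-scale physics and their contributions are negligible except at curvatures approaching the Planck length.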

This would mean that most of the higher order effects would only be seen at the order of the Planck Length, i.e. at the Big Bang or inside a Black Hole.

Moreover, General Relativity always generates singularities when you calculate how it moves small amounts of matter, such as atoms or electrons, using the Path Integral, which is the most fundamental description of how objects move.

In effect, the higher-order effects of an Effective Field Theory of gravity have to be modeled independently of the lower-order effects to make sensible predictions of its interaction with elementary particles, something which still lies beyond the scope of current theoretical physics.

Studies in the field of quantum gravity have led to the creation of different descriptions and models of spacetime metrics. The creation of different metrics to extend General Relativity can sometimes produce theoretical models that radically change our concept of motion.

The most famous metric that models exotic motion in spacetime is Alcubierre's metric, so it serves as the archetypal probe into how to model exotic motion in spacetime.



Alcubierre Metric: The Idea of a Stable Warp Field



In 1994, Miguel Alcubierre developed a spacetime metric describing space-time warped in a bubble around a ship, creating a "warp drive".


The warp drive proposed by Alcubierre could achieve near light speeds and even faster-than-light speeds by distorting space-time. To accomplish this, a theoretical device would generate a field of negative energy that would squeeze or stretch space-time, creating the bubble. The bubble would ride the distortions like a surfer on a wave. 

As evidenced by cosmic inflation in the big bang, in certain conditions space-time can expand so quickly that objects can move faster than the speed of light.



The theoretical basis for the operation of this experiment is that a massive object causes spacetime to curve and in-turn spacetime tells a massive object how to move and accelerate.


It is postulated that spacetime curvature can be modified using powerful electromagnetic fields to reduce the inertial mass of a starship. 







By bending spacetime in a particular way you can make it so that locally you move slower than light, but that the overall effect is faster-than-light travel.  The arrangement of matter and energy that allows for this is unfortunately impossible.  This diagram is from page 145 of “Gravity”, by Hartle.



In the weakest implementation of this theory a starship can be made to accelerate as if its inertial mass were reduced, making near light speed possible using simple electric thrusters. In the most advanced implementation, when the energy of the electromagnetic fields causes the inertial mass of the starship to become imaginary, the starship in the warp bubble becomes tachyonic and is capable of moving faster than the speed of light. In this advanced form the object in the warp bubble is isolated from the rest of the universe, allowing the warp bubble to become a local frame of reference in which Faster Than Light (FTL) travel does not violate the local speed of light (c).

A Tachyon is a hypothetical particle postulated to move at a velocity greater than the speed of electromagnetic radiation, such that as the particle accelerates it loses energy. Of its two properties, rest mass and energy, one must be real and the other imaginary. If a tachyon exists it may be detected through the emission of Cherenkov radiation (a kind of electromagnetic shock wave) in a particle accelerator or by cosmic ray collisions.
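This energy-velocity relation can be made concrete. For an imaginary rest mass \(im\), the relativistic energy \(E = mc^2/\sqrt{1 - v^2/c^2}\) becomes \(E = |m|c^2/\sqrt{v^2/c^2 - 1}\), which is real for v > c and decreases as the speed increases. A minimal Python sketch, in natural units with c = 1 and |m| = 1:

```python
import math

def tachyon_energy(v, m=1.0, c=1.0):
    """Energy of a tachyon (imaginary rest mass i*m) moving at speed v > c."""
    assert v > c, "tachyons only exist at superluminal speeds"
    return m * c**2 / math.sqrt((v / c)**2 - 1.0)

# The faster the tachyon moves, the LESS energy it carries:
print(tachyon_energy(1.5))   # -> ~0.894
print(tachyon_energy(2.0))   # -> ~0.577
print(tachyon_energy(10.0))  # -> ~0.100
```

Note how this inverts the familiar picture for ordinary matter, where energy grows without bound as v approaches c from below.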




The Alcubierre metric was the first attempt to design a theoretical model that can make some predictions about what is necessary for a warp drive.






Here also are some plots showing some of the key aspects of the model, including the Alcubierre Metric, Light Cone and Energy density around a hypothetical warp ship.




 Miguel Alcubierre's famous warp metric is of the form:

\[ ds^2 = -dt^2 + \left(dx - v_s f(r_s)\,dt\right)^2 \]

with

\[ v_s(t) = \frac{dx_s(t)}{dt} \]

which is simply the velocity of the system; in classical mechanics this is given similarly through v = dx (distance) / dt (time). The d terms arise through calculus, where one obtains the geodesic relation for a curvature (i.e. an arc of a circle) and the line path. Also note that for consistency the terms \(dy^2 + dz^2\) are needed in the first equation; however, they are not needed directly to understand the warp theory, and are removed to make the equation easier to handle. The \(r_s\) term is given through

\[ r_s(t) = \left[(x - x_s(t))^2 + y^2 + z^2\right]^{1/2} \]

Neglecting the y and z components, this is the difference between the original coordinates and the warp drive coordinates. A localized region of space is propelled through the x direction (to the right in the figure below) by a velocity determined through the function \(f(r_s)\), which resembles a "top hat" function (given through hyperbolic trigonometry):

\[ f(r_s) = \frac{\tanh\big(\sigma(r_s + R)\big) - \tanh\big(\sigma(r_s - R)\big)}{2\tanh(\sigma R)} \]
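As a quick numerical sanity check of this "top hat" shape, a minimal Python sketch (using R = 3 and a wall-thickness parameter of 1, matching the Matlab code at the end of the page) confirms that f is approximately 1 inside the bubble radius and approximately 0 far outside:

```python
import math

def f(rs, R=3.0, sigma=1.0):
    """Alcubierre "top hat" shape function: ~1 inside radius R, ~0 far outside."""
    return (math.tanh(sigma * (rs + R)) - math.tanh(sigma * (rs - R))) \
           / (2.0 * math.tanh(sigma * R))

print(f(0.0))   # at the bubble centre -> 1.0
print(f(10.0))  # far from the bubble  -> effectively 0
```

The parameter sigma controls how sharp the bubble wall is: the larger sigma, the closer f gets to an ideal step between 1 and 0.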

This metric supposes a contraction of spacetime in front of a body, with an expansion behind it. The expansion and contraction can be seen through the coordinates x and \(r = (y^2 + z^2)^{1/2}\) (shown in the figure below).


Using such a metric, generated around a ship,  we can picture a craft surfing what is essentially a gravitational wave front:






(1) The vertical dimension represents how much a given volume of space-time expands or contracts in Alcubierre’s model. Positive values [red] imply an expansion in space-time caused by negative mass-energy density in Einstein's theory of General Relativity. When space-time expands behind a craft, it propels the ship forward.
(2) Inside the warp bubble, neutral space-time would leave the ship undisturbed. Passengers would experience a zero-G environment.  Artificial gravity can be created in a portion of the ship using rotation to create a stable centrifugal force.
(3) Negative values [blue] imply a contraction in space-time caused by positive (i.e."normal") mass-energy density. The contraction balances the expansion of space-time as the bubble moves forward. Combined this allows the ship to "surf" the gravitational wave front.


These plots were developed using a Matlab code I wrote which is available for copying at the end of the page.



In fact what Alcubierre proposed as a "warp drive" uses a form of bipolar (or "dual") gravitational waves as a method of propulsion. Gravitational waves in general relativity are planar, and hence each wave expands and contracts; however, the Alcubierre metric in principle suggests that such an effect could be bipolar, possibly explaining the necessity for a "negative mass-energy density" requirement.

What this metric truly suggests is that such a manipulation of space would cause spacetime to propel a localized region of space (referred to as a warp bubble) by expanding and contracting the metric field. Since gravitational radiation is believed to propagate at the speed of light, the propulsion of this space is similar in principle to how electric and magnetic fields cause electromagnetic radiation to propagate.



Another way Alcubierre's warp geometry could be brought closer to reality is through the Van Den Broeck metric:

\[ ds^2 = -dt^2 + B^2(r_s)\left[\left(dx - v_s f(r_s)\,dt\right)^2 + dy^2 + dz^2\right] \]

The basis of this model is to shrink the "warp bubble" (the flat spacetime within Alcubierre's warp metric) to microscopic proportions to negotiate around the negative energy conditions. This is because the Alcubierre metric requires an enormous amount of negative energy, which according to classical conservation laws shouldn't exist. So the Broeck metric basically shows how to shrink the "Alcubierre warp bubble" so that it requires less "negative energy."


This, however, only affects the external properties of the warp bubble, while internally the effects of the bubble could be as large as one wished (this deals with the construction of energy densities within the field). The main benefit of this theory is that it dramatically lowers the negative energy requirements, thereby making warp drive look feasible with an advanced technology.

Nevertheless, such a metric still requires a mass-energy comparable to that of Jupiter concentrated in a region about the size of a football field. The engineering specifications of such a device are therefore literally in outer space, and no sane person would, or should, fund it.

Ideas about how such a device would work are, however, free. Theoretical physics is the cheapest form of science to fund, even cheaper than mathematics: unlike the mathematician, who needs a pencil, paper and a waste paper bin for the ideas he throws out, the theoretical physicist can be kept happy without the waste paper bin!

As Richard Feynman once joked in his lecture series on the "Character of Physical Law": "Every theoretical physicist has at least 6 different competing theories for the same phenomena floating around in his head, each describing the same phenomena in a different way but coming to similar conclusions."

However, as with any theory, there are loopholes in such scenarios as faster-than-light travel which lead to paradoxes. One such paradox is that if a ship is travelling faster than c, then shouldn't it appear to be travelling backwards in time? Paradoxes should therefore be expected, as we do not fully understand the true nature of gravitational interactions, particularly with atomic or subatomic phenomena.


Physical Principles needed to "Build" Alcubierre's Warp Drive




As evidenced by the uniformity of the Cosmic Microwave Background from the Big Bang, which is explained by inflationary cosmology, space-time can expand so quickly that objects can move faster than the speed of light. 
Therefore the current models of physics generally allow for the existence of a warp field that can accelerate objects faster than the speed of light.

The real questions to ask are whether or not such a warp field can exist on macroscopic scales and, if so, whether it can remain stable for long enough to observe its effects, on light in a laser interferometer for example.

Moreover it is unknown how it would be technologically possible, i.e. under what conditions does matter allow for the creation of a negative energy density?

In 1948, theoretical physicists Hendrik Casimir and Dirk Polder proposed that a negative pressure can exist due to quantum vacuum fluctuations operating on very small scales in space and time: if two uncharged metallic plates in a vacuum are placed a few micrometers apart, the quantum fluctuations should create a force between the two plates due to a differential vacuum energy density between the inside and outside of the plates.

Classical Experimental Setup of the Casimir Effect 


In a classical description, the lack of an external field automatically means that there is no field between the plates, and no force would be measured between them. However when the zero-point field is instead studied using the QED vacuum of quantum electrodynamics, it is seen that the plates do affect the virtual photons which constitute the field, and generate a net force.

The force can give either an attraction or a repulsion depending on the specific arrangement of the two plates. 

Although the Casimir effect can be expressed in terms of virtual particles interacting with the objects, it is best described and more easily calculated in terms of the zero-point energy of a quantized field in the intervening space between the objects. 
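For the idealised case of two perfectly conducting parallel plates, this zero-point-energy calculation yields an attractive pressure \(P = \pi^2 \hbar c / (240\, a^4)\) for plate separation a. A quick back-of-the-envelope evaluation (a minimal Python sketch) shows how small the force is at micrometre scales:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant (J s)
c = 2.99792458e8        # speed of light (m/s)

def casimir_pressure(a):
    """Attractive Casimir pressure (Pa) between ideal parallel plates a metres apart."""
    return math.pi**2 * hbar * c / (240.0 * a**4)

# At a 1 micrometre separation the pressure is only about 1.3 millipascals;
# tiny, but measurable, as Lamoreaux's 1997 experiment confirmed.
print(casimir_pressure(1e-6))  # -> ~1.3e-3 (Pa)
```

The steep 1/a^4 scaling is why the effect is negligible at everyday separations yet significant for nanoscale devices.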

It was not until 1997, however, that a direct experiment, by Steve Lamoreaux, quantitatively measured the force (to within 15% of the value predicted by the theory).

Previous work in the 1970s had observed the force qualitatively, and indirect validation of the predicted Casimir energy had been made by measuring the thickness of liquid helium films by Sabisky and Anderson in 1972. Subsequent experiments with liquid Helium-3 approach an accuracy of a few percent.

Using Bose-Einstein Condensates it may also be possible to suppress background effects occurring between individual molecules, such as the van der Waals forces, which will help to quantify the necessary boundary conditions in the second-quantisation calculations of Quantum Electrodynamics. This would allow the effect of the vacuum to become dominant in the medium and allow for a more direct observation of negative energy density affecting it.

This could also allow for further studies in solid state physics on how the Casimir effect could be controlled at the nanoscale and what physics and applications can be gained from it. 

Some of this research, although abstract, may help us to understand how electronic transitions occur at the smallest of scales and how to suppress noise, such as that caused by the Casimir effect, in nanoscale circuitry, such as in the emergent fields of quantum circuitry. 


Therefore by probing some deep questions of physics, and examining theories such as "Warp Drives" we may uncover a great deal of knowledge and gain proposals for some interesting experiments, perhaps even stumbling upon the foundations of warp drive itself along the way.




Some final thoughts...



The fact that we can even ask some of these questions concerning negative energy fluctuations, higher dimensions and the controlled warping of spacetime is a testament to how advanced topics in science can capture the imagination and motivate us to look beyond parochial assumptions. Such thinking is healthy for the imagination and helps us realize that we have some potential to achieve great powers through the use of our intelligence and imagination.




The physics of "Warp Drive" is complicated, to say the very least, and may be considered far-fetched by today's standards, perhaps more far-fetched than flying carpets are to supersonic aircraft.  

One question that could be asked is "how can humans even begin to manipulate space and time?". More understandable of course is the response that "this sounds like science fiction". However, we should not let that statement deter us from asking questions. That nature has not thrown warp fields at us the same way it has thrown lightning, earthquakes and starlight at us does not mean that they are impossible to create.

To put this into perspective, consider again the analogy of magnetism. The interstellar magnetic field is about a nanoTesla, or about one fifty-thousandth of the Earth's field, which ranges from about 25 to 65 microTeslas at the surface. This is staggeringly small. Even Jupiter's magnetic field is only about 10 times stronger than Earth's, and even our Sun's field, though extensive, is nothing particularly alarming, with sunspots, where the field lines reach their most intense activity, reaching 0.5 Teslas at most.

If this is all we knew about magnetism, harnessing magnetism for any practical purpose would seem unlikely, and achieving field strengths to rival the stars would seem far-fetched and fictitious.








However, a tiny Neodymium magnet you can hold in your hand exhibits magnetic fields of 1.3 Teslas, over a billion times stronger than the interstellar field and almost 3 times as large as sunspot fields.
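These orders of magnitude are easy to check; a quick Python sketch using the field values from the comparison above:

```python
interstellar = 1e-9  # interstellar magnetic field, ~1 nanoTesla
sunspot = 0.5        # strong sunspot field, Tesla
neodymium = 1.3      # small NdFeB hand magnet, Tesla

print(neodymium / interstellar)  # -> ~1.3e9, over a billion times the interstellar field
print(neodymium / sunspot)       # -> 2.6, almost 3x a sunspot field
```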








Therefore even on the stellar scale humans can best nature at some things. Although the manipulation of gravity might seem beyond conventional engineering, the fact that the idea can be discussed fairly easily in the context of General Relativity will no doubt continue to make it an interesting introduction to some of the physics and mathematics of General Relativity, which already has applications and presence in astronomy and space science today.





Matlab Code for Alcubierre Warp Drive Model:


[x, y] = meshgrid([-10:.1:10],[-10:.1:10]);

%Radius R
R=3;

sigma = 1;

%Ship's Position
xs = 0;

%Preallocate the output grids
z  = zeros(size(x));
rs = zeros(size(x));
E  = zeros(size(x));

for i = 1:length(x)
for j = 1:length(x)

    %Alcubierre Warp Metric
    z(i,j) = -1*(tanh(sigma*sqrt(abs(x(i,j)^2+y(i,j)^2-16))+R)-tanh(sigma*sqrt(abs(x(i,j)^2+y(i,j)^2-16))-R))...
        *tanh(sigma*R)*x(i,j);

%invariant length of space

rs(i,j) = sqrt(abs(x(i,j)^2 + y(i,j)^2 + z(i,j) )) ;

%Energy density function


E(i,j) =  (tanh(sigma*(rs(i,j) + R)) - tanh(sigma*(rs(i,j) - R)))/(tanh(sigma*R)) ;
end

end

%Light Cone
figure(1)
mesh(x,y,rs)
axis([-10 10 -10 10 -10 10 -10 10])
view([158,26])
colormap(jet)

%Energy Density 
figure(2)
mesh(x,y,E)
axis([-10 10 -10 10 -10 10 -10 10])
view([158,26])
colormap(jet)

%Alcubierre Warp Metric
figure(3)
mesh(x,y,z)
axis([-10 10 -10 10 -10 10 -10 10])
view([158,26])
colormap(jet)