How to project galaxy data in x y z coordinates?

For a data visualization project, I want to extract galaxy data (in x, y, z coordinates) from a galaxy catalogue out to 150 Mpc, plot each galaxy as a dot, and visualize the result as a 2D projection of a spherical volume. I need help finding this data set and advice on how to achieve this.

I would be grateful for any help!


You could try the HyperLeda database. You will have to learn to use the database, which possibly means learning a little SQL. For starters you could select all objects with redshift $cz < H_0 \times 150\ \mathrm{Mpc}$ (about 10,500 km/s for $H_0 = 70$ km/s/Mpc), using redshift as a distance indicator. This should be roughly OK at 150 Mpc (though I don't see any viable alternative).

The returned catalogue should have (at least) RA, Dec and redshift, possibly a size and morphology, and, for a small subset of the galaxies, a redshift-independent distance estimate. Given that very nearby galaxies (in the Local Group, or at $d < 50$ Mpc) are not fully part of the "Hubble flow", you might choose to use these redshift-independent distances, where available, in preference to a redshift-based distance.

X, Y, Z coordinates based on the equatorial system can be calculated using this prescription; coordinates in the Galactic system (with respect to the centre and orientation of our Galaxy) can then be found using the operations defined in Johnson & Soderblom (1987).
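
As a sketch of that prescription (a minimal example assuming redshift distances d = cz/H0; the input values are made up):

    import numpy as np

    H0 = 70.0  # km/s/Mpc, an assumed value

    def equatorial_to_cartesian(ra_deg, dec_deg, dist_mpc):
        """RA/Dec in decimal degrees + distance in Mpc -> equatorial X, Y, Z (Mpc)."""
        ra, dec = np.radians(ra_deg), np.radians(dec_deg)
        x = dist_mpc * np.cos(dec) * np.cos(ra)
        y = dist_mpc * np.cos(dec) * np.sin(ra)
        z = dist_mpc * np.sin(dec)
        return x, y, z

    # e.g. a galaxy with cz = 7000 km/s at RA 187.7 deg, Dec 12.4 deg
    print(equatorial_to_cartesian(187.7, 12.4, 7000.0 / H0))

(astropy's SkyCoord can apply the equatorial-to-Galactic rotation for you if you would rather not transcribe the Johnson & Soderblom matrices.)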

I have no idea how complete this database will be. Certainly there will be an issue with extinction near the Galactic plane.


Importing a Large Number of Points from CSV with x,y,z Coordinates as Independent Points

Hi everyone. I've seen a discussion about importing key-frames, and some brilliant workarounds people have implemented. Action Pro now provides a method to import CSV as key-frames, but not as individual, independent points.

I have some astronomical simulations and animations I do for education and science at an astronomy research institute. I have access to a database from which I can grab x,y,z coordinates (actually polar coordinates, which I have to convert).

Any idea if it's possible to use a CSV data file (let's just say 100 points) and import each as a point, wherein I can then parent a "star light" to each of them?

Honestly, if I could pull that off, that's something I would be willing to share. It exists in custom software here, and some other video software that cannot export those values into HitFilm (go figure).

That would be ingenious! I am using it as a background as opposed to fly-through. It would be freaking incredible to be able to pull in 1000+ objects as background stars (if anyone knows what GAIA is).

Otherwise, I create an artificial star system background either from a 2D image or particle simulator. I have a number of animations I'd love to do that for. If anyone has a way, I would post a sample created in HitFilm.


Galactic Directions

The Terragen Sphere is extended over a large part of the Orion Arm of the Milky Way Galaxy, and extends into the inner Sagittarius Arm and out to the Perseus Arm. It also extends above and below the main plane of the galaxy, although stars and colonies are spread more thinly there. But how are the various directions of the Terragen expansion defined?

Planetary terms like north, south, east, west, are insufficient for referring to directions within the galaxy. Instead, the following conventions have achieved widespread acceptance when referring to direction:

Toward the galactic core is coreward; away from it, in the direction of the rim, is rimward. The direction in which the galaxy is rotating is spinward, and the opposite direction is counterspinward or trailing.

The widely used system of Galactic Cartesian Coordinates (generally known as Galactic xyz coordinates) gives locations in terms of distances along the x, y, and z axes of a grid centred on Sol, with the x-axis pointing coreward, the y-axis spinward, and the z-axis towards Galactic North.
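
In code, this amounts to the ordinary galactic spherical-to-Cartesian conversion; a minimal sketch, assuming the conventional identification of the x-axis with galactic longitude l = 0 and the y-axis with l = 90 degrees:

    import numpy as np

    def galactic_xyz(l_deg, b_deg, dist):
        """Galactic longitude/latitude (degrees) + distance -> x (coreward),
        y (spinward), z (toward Galactic North), in the units of dist."""
        l, b = np.radians(l_deg), np.radians(b_deg)
        return (dist * np.cos(b) * np.cos(l),  # l = 0 points at the core
                dist * np.cos(b) * np.sin(l),  # l = 90 deg points spinward
                dist * np.sin(b))              # b = 90 deg is Galactic North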

The standard translation between galactic directions and constellations is:
above - Coma (Galactic North)
below - Sculptor (Galactic South)
corewards - Sagittarius
rimwards - Gemini or Taurus
spinwards - Cygnus
counterspinward - Vela

Although these directions are widely used, in the Outer Volumes the term coreward is also sometimes used to indicate the direction of the Inner Sphere. There is some potential for confusion if the term is taken out of context, and the Astrographic Institute on Mekelon has recommended that Solward be used instead in this context.

Another source of confusion is the use of 'up' and 'down' by astrographers in the Communion of Worlds; these use a convention where 'up' means towards Galactic South, and 'down' means towards Galactic North. This may be because many of the inhabitants of this empire originate from the southern hemisphere of Old Earth, or possibly they use the alternative convention of 'rotational North', which is always taken from the anti-clockwise pole. The true reason is lost in the depths of galactic history.

Another way that directions are defined is by simply using the names of the ancient constellations as seen from Old Earth. Although most of these constellations are unrecognisable as soon as one travels more than a few tens of light years from Sol, the directions they define are still useful. Thus the Sagittarius Transcultural Cooperation is mostly located in the Sagittarius direction, while the Orion Federation is partly located in the Orion constellation as seen from Earth (although the capital is actually in Monoceros).

Most of the locations in the Orion's Arm Civilisation can be found in the sectors defined by those constellations which were aligned with the Milky Way as seen from the Solar System; these include Sagittarius, Scorpius, Ara, Norma, Lupus, Circinus, Centaurus, Crux, Carina, Vela, Puppis, Canis Major, Monoceros, Orion, Gemini, Taurus, Auriga, Perseus, Cassiopeia, Lacerta, Cepheus, Cygnus, Vulpecula, Sagitta, Aquila, Serpens and Scutum.


The Geometry of the Local Supercluster

Our Milky Way galaxy is a member of the Local Group of galaxies, which lies on the outskirts of a large structure of galaxies now called the "Local Supercluster." Although it was first identified on very early sky maps of the distribution of nebulae (e.g. Messier's catalog of nebulae and Dreyer's New General Catalogue), it was only first recognized as a dynamical entity about 50 years ago by Gérard de Vaucouleurs, after early measurements of the redshifts of its brightest members confirmed that they were generally all in the same volume of space and relatively nearby. De Vaucouleurs (1956) originally called the structure the "Local Supergalaxy," but in 1958 he renamed it the Local Supercluster (LSC). We now know of many structures of similar or larger size, often containing more than one galaxy cluster, generally called superclusters.

The LSC is roughly centered on the Virgo Cluster, the nearest cluster of galaxies, containing several hundred bright galaxies in a volume a few megaparsecs (roughly 10 million light-years) across. As can be seen from the sky map below, it is a flattened, planar structure with a halo of other groups and clouds of galaxies. The Virgo Cluster itself, the subject of another exercise, is at a distance of approximately 16.7 megaparsecs (55 million light-years), as determined by distance measurements with the Hubble Space Telescope.

About 30 years ago came the discovery of the ubiquity of dark matter and the realization that the LSC should have a measurable effect on the motion of the Local Group (Peebles 1973; Silk 1973; Gunn).

The purpose of this project is to study the properties of the Local Supercluster with a new and deep all-sky survey of galaxies, the 2MASS Redshift Survey (2MRS). We will measure several global properties of the LSC, including its size and shape. We will examine maps of the distribution of galaxies in the supercluster and the distribution by morphological type.

If you look at the redshift-space distribution (we plot right ascension against apparent radial velocity), you can see the "Virgo Finger," an example of the "Finger of God" effect: the stretching out of the galaxy positions in "redshift" space (versus real 3-D "configuration" space) caused by the rapid motions of the galaxies in the gravitationally collapsed cluster core. Galaxies in the cluster are moving much faster than they would be if they were simply part of the uniform expansion of the Universe --- gravitational collapse and relaxation does that.

I have attempted to correct the supergalactic x,y,z positions in the data file above for these motions by "collapsing" the redshift-space coordinates of the galaxies in the cluster core, defined as being projected within 6 degrees of the center of Virgo, using an approximation formula that shrinks the finger by a factor of six, its approximate stretching in Virgo relative to pure Hubble expansion. This of course might make your map look a little funny, but it's the best we can do so as not to be dominated by the finger-of-God effect.
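
As a sketch of that kind of collapse (the 1100 km/s centroid velocity and the exact prescription are assumptions for illustration; the exercise's own formula may differ in detail):

    import numpy as np

    def collapse_finger(v, sep_deg, v_virgo=1100.0, factor=6.0):
        """Compress redshift-space depth for galaxies projected within
        6 degrees of the Virgo centre; v in km/s, sep_deg in degrees."""
        v = np.asarray(v, dtype=float)
        core = np.asarray(sep_deg) < 6.0
        out = v.copy()
        out[core] = v_virgo + (v[core] - v_virgo) / factor
        return out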

P.S. Can you think why the finger in redshift space points back towards the origin (us)? That's why it's called the finger-of-God.

What next? The first thing to think about is the definition of the plane of the Local Supercluster. There is a well-known definition that dates back to the early days of large galaxy catalogs, created by Gérard de Vaucouleurs and his collaborators. (See "The Second Reference Catalogue of Bright Galaxies" by de Vaucouleurs, de Vaucouleurs & Corwin.)

A very useful thing to do is to plot the data in various ways. One way is to transform the coordinates to the supergalactic system of de Vaucouleurs et al. and plot our catalog in that system. I will look for a way to simply transform our catalog into those coordinates, much the way we transformed RA & Dec to Galactic L and B. But you should think about coordinate rotations. Also remember that we see more of the near side of the LSC than the far side.
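
If you would rather not derive the rotation yourself, astropy implements the de Vaucouleurs supergalactic frame; a minimal sketch with made-up input values:

    from astropy.coordinates import SkyCoord
    import astropy.units as u

    c = SkyCoord(ra=187.7 * u.deg, dec=12.4 * u.deg,
                 distance=16.7 * u.Mpc, frame='icrs')
    sg = c.supergalactic          # de Vaucouleurs supergalactic frame
    print(sg.sgl, sg.sgb)         # supergalactic longitude and latitude
    print(sg.cartesian.xyz)       # SGX, SGY, SGZ in Mpc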

What are the questions we want to answer and how can we do that?

A hint on size: we often describe the "sizes" of fuzzy things by contours, either relative to the central brightness or to some mean level. In the case of the galaxy distribution, "overdensity contours" are a popular choice. You essentially need to calculate the mean density of the sample volume, probably best done with a volume-limited sample, and then, at each place in the survey, compute the density relative to that mean. Look at the shape of the volume contained by places with densities greater than, say, 2 times the mean, or perhaps 1.5 times the mean.
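
A minimal sketch of that overdensity calculation (the grid extent and cell size are arbitrary choices, and xyz stands for whatever (N, 3) array of supergalactic positions you build from the data file):

    import numpy as np

    def overdensity_grid(xyz, half_width=40.0, cell=5.0):
        """Count galaxies in cubic cells and normalize by the mean count."""
        edges = np.arange(-half_width, half_width + cell, cell)
        counts, _ = np.histogramdd(xyz, bins=(edges, edges, edges))
        mean = counts.sum() / counts.size
        return counts / mean

    # Cells with overdensity >= 2 (or 1.5) trace the contours discussed above.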

Another, harder description of "size" is to fit a function to the shape of the density distribution and use a parameter of the fit to define size (e.g. the full width at half maximum, or the volume that matches the shape of the distribution and contains half the total luminosity).

Some simple plots showing the structures in supergalactic cartesian coordinates are here:

Note that these plots are, like our original data set, centered on the Milky Way. These plots were made using a plotting program/system called Supermongo. An example program is plane.sm. This will run on any of the CfA unix/linux systems, provided all the data files are linked correctly and the proper software links are created in your login command file.

The next issue to think about is the fact that our sample is an "apparent magnitude" limited sample. That means we can see pretty much every galaxy nearby, but only the most luminous ones at the edge of the sample. The absolute magnitude of a galaxy in the survey is

    M = m - 5 log10(D) - 25

where m is its apparent magnitude and D is the distance in megaparsecs given by its velocity divided by the Hubble constant --- for us D = v/(70 km/s/Mpc), which is tabulated in the data file. The limiting absolute magnitude for the sample selected for you (magnitude limit 11.25 out to 4500 km/s) is thus approximately M = -22.8.

However, when you recenter on Virgo and cut again at, say, 2000 km/s around that center, the limiting magnitude gets a little fainter (we're not looking out quite so far) and becomes something like M = -22.0.

That's because we're no longer looking all the way out to 4500 km/s, but only to 1100 + 2000 = 3100 km/s. Note that 11.25 is the magnitude limit of our survey (see above).

Thus you can construct a magnitude limited sample by calculating the absolute magnitudes of the galaxies in the survey using the equation above and then creating a new sample, centered on Virgo, but cut to be brighter (more negative) than -22.0. That will be a sample smaller than the one you ended up with after the coordinate shift and radial cut, but it will uniformly sample the volume.
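
Putting those steps into code, a sketch assuming arrays m and v hold the data file's apparent magnitudes and velocities (km/s):

    import numpy as np

    H0, m_lim = 70.0, 11.25   # Hubble constant and survey magnitude limit

    def absolute_mag(m, v):
        """M = m - 5 log10(D) - 25, with D = v/H0 in Mpc."""
        return np.asarray(m) - 5.0 * np.log10(np.asarray(v) / H0) - 25.0

    # Volume-limited subsample: keep galaxies brighter (more negative)
    # than the -22.0 limit that applies out to 1100 + 2000 = 3100 km/s.
    # keep = absolute_mag(m, v) < -22.0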


Neo4j 3.0 and the Graph of the Universe

I checked out the original website of Cosmic Web, which is beautifully done.

Their paper describes the work of correlating galaxies in our cosmos by different means.

    • Fixed-Length Model: All galaxies within a set distance of l are connected by an undirected link.
    • Varying-Length Model: The length of each link is proportional to the “size” of the galaxy, l = a * R(i) ^ (1/2)
    • Nearest Neighbors Model: Each galaxy is connected to its closest neighbors with directed links. In this model, the length of each link depends on the distance to the nearest galaxy.

    The last model provided the most accurate representation of the real cosmic web.

    Graph Visualization

    A visual artist, Kim Albrecht, visualized the resulting graphs beautifully using Three.js.

    Working with Raw CSV Data

    Fortunately for me, the raw sources for this dataset were CSV files, with the galaxies forming nodes and the different relationship types that represent the means of connecting them described in their research.

    Importing Data into Neo4j 3.0

    With Neo4j 3.0, I could quickly import them using the LOAD CSV mechanism; here is the full script.

    The only trick I had to pull off was to collect the galaxies first into a list, to get an index for their row in the CSV. That's why loading the node CSV takes longer than the relationships.
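
    As a rough sketch of that collect-for-an-index trick, here written with the Python driver rather than the script linked above (the file name, CSV columns, and credentials are placeholders):

        from neo4j import GraphDatabase  # pip install neo4j

        import_galaxies = """
        LOAD CSV WITH HEADERS FROM 'file:///galaxies.csv' AS row
        WITH collect(row) AS rows
        UNWIND range(0, size(rows) - 1) AS idx
        CREATE (:Galaxy {id: idx,
                         x: toFloat(rows[idx].x),
                         y: toFloat(rows[idx].y),
                         z: toFloat(rows[idx].z)})
        """

        driver = GraphDatabase.driver("bolt://localhost:7687",
                                      auth=("neo4j", "password"))
        with driver.session() as session:
            session.run(import_galaxies)
        driver.close()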

    Query & Visualize in the Neo4j Browser

    Running the import gives me some nice visual results in the Neo4j Browser.

    Neo4j 3.0 Bolt Binary Protocol Test

    With Neo4j 3.0, I wanted to test the performance of the new binary protocol (a.k.a. Bolt). So I grabbed the JavaScript neo4j-driver from npm and retrieved all 211k neighbourhood relationships in one go. Just pulling the data and measuring the outcome is easy, as you can see below.

    Force Layout Graph Visualization with ngraph

    Although I have no artistic talents whatsoever, I could at least try to load the data from Neo4j into Anvaka’s ngraph and let its force layout algorithm do the work.

    Please note that the artistic three.js visualization mentioned above uses pre-laid-out data; the x, y, z coordinates are still available as properties in the data.

    But I wanted to see how well ngraph can load and lay out 200k relationships without any preparation, just in JavaScript.

    The loading was quite quick, like before. The force layout took some time, though, and resulted in a really nice two-dimensional graph rendering of our cosmos.

    Everyone can import the data on their own quickly by running my import script after starting your Neo4j 3.0 server. (You might need to configure in conf/neo4j.config that the "remote-shell" is enabled.)

    Conclusion

    This was only me having fun with galaxies and Neo4j 3.0 around 3 a.m. If you want to read and hear from a real graph-astronomer, check out Caleb W. Jones’ work on “Using Neo4j to Take Us to the Stars”.

    P.S. Graphs are everywhere – even our cosmos forms one.




    Doin' Real Science: Simulating Particles

    Many times on this blog, I challenge the purveyors of various pseudo-sciences to demonstrate their claims to the same standards that mainstream scientists must meet. One of these criteria is that real science must make testable predictions. In the physical sciences, this usually means numerical predictions which can be compared to actual measurements, whether in situ or observations. When they're not evading the question completely, pseudo-scientists and their supporters construct all manner of excuses why they should be exempt from these standards.

    To make the numerical predictions, modern scientists usually rely on computers to do the repeated mathematical calculations (sometimes called 'number crunching') that these types of predictions require. Once the programs are written, the computer can perform them tirelessly.

    One tool in many researchers' toolboxes is some type of particle simulation, used to study the interactions and motions of many particles. Codes written for an arbitrary number of particles, N, are usually called N-body codes.

    I first wrote my own N-body code in 1979 in AppleSoft BASIC on an Apple II computer (wikipedia), in my early college years (the FIRST time I tried to complete my degree). Back then, computational libraries for solving differential equations existed only for large scientific systems. On smaller computers you had to write your own, but plenty of documentation of the techniques was available. One of the most popular techniques for solving these types of differential equations is the family of Runge-Kutta integrators (wikipedia).

    Today, computer hardware is readily available, as are numerical processing libraries. Numerical N-body solvers are simple enough that a competent programmer in high school could write their own.

    So for this project, I've updated my N-body code. I've written the new version in Python (v2.7), using the Python numerical libraries numpy, scipy, matplotlib, and others (for more info on the installation). I've also written an interface to generate rendered output using the POVray rendering package.
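
    As a much-reduced illustration (not the author's code): a 2-body Kepler run using SciPy's adaptive Runge-Kutta integrator, with the 10 and 1 solar-mass bodies described below but made-up initial conditions.

        import numpy as np
        from scipy.integrate import solve_ivp

        G = 4.0 * np.pi**2          # AU^3 / (solar mass * yr^2)
        m1, m2 = 10.0, 1.0          # masses in solar masses

        def deriv(t, s):
            """State s = [x1, y1, x2, y2, vx1, vy1, vx2, vy2]."""
            r1, r2, v1, v2 = s[0:2], s[2:4], s[4:6], s[6:8]
            d = r2 - r1
            a = G * d / np.linalg.norm(d)**3
            return np.concatenate([v1, v2, m2 * a, -m1 * a])

        # Illustrative elliptical initial conditions; the CM drifts uniformly.
        s0 = [0.0, 0.0, 5.0, 0.0, 1.0, 0.0, 1.0, 6.0]
        sol = solve_ivp(deriv, (0.0, 5.0), s0, rtol=1e-9)
        x_cm = (m1 * sol.y[0] + m2 * sol.y[2]) / (m1 + m2)  # CM x(t)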

    In testing these types of codes, we usually perform runs on simple configurations where the solution is available in a known form. Here are the results from a 2-body gravitational run, the Kepler Problem (wikipedia). The program allows me to generate tables of particle positions once, after which I can read these tables and construct different types of plots. Here I plot the results of the run in the reference frame in which the velocities and positions are originally defined.

    Note that for the two objects, there is a point that lies on a line between the two particles that represents the center-of-mass (wikipedia), or barycenter, of the two particles. In this configuration, the center-of-mass moves in a straight line and at a uniform speed. This is a property of the center-of-mass that can be mathematically proven for a system of particles interacting by a central force (where the force acts along the line between the particles).

    Now let's define some of our mathematical terms:

    x(A),y(A), z(A) = x, y, z coordinates of object A in the original coordinate system

    x(CM), y(CM), z(CM) = x, y, z coordinates of center-of-mass in original coordinate system

    Next we can plot the exact same data points, but this time we re-compute their positions in a reference frame where the center-of-mass is not moving. We do this by computing the center-of-mass of the two particles at each time step (x(CM), y(CM), z(CM)) of the simulation, and then compute the position of the particles relative to that center-of-mass:

    x(A,CM), y(A,CM), z(A,CM) = coordinates of object A in the CM coordinate system, where the center-of-mass is translated to the origin, (0,0,0)

    x(A,CM) = x(A) - x(CM)
    y(A,CM) = y(A) - y(CM)

    For simplicity in plotting, I've restricted the simulation run to motion in 2-dimensions, x and y, with z=0. For this type of problem, it can also be mathematically proven that if the particles start with all positions and velocities in the z-direction equal to zero, they will remain zero at future times.

    Now we can plot the particle trajectories in the CM reference frame, which generates the following plot.

    Now the two particles revolve around their mutual center-of-mass (position 0,0 in the plot), always keeping the center-of-mass on the line between them. It can similarly be mathematically proven that in a system of particles interacting by central forces, the motion can always be separated into the motion of the particles around the center-of-mass and the uniform motion of the center-of-mass itself.

    Finally, we complete the Kepler 2-body problem by plotting the two particles in a coordinate system where the larger body represents the origin of the coordinate system. This is the coordinate system of the original Kepler problem, and the coordinate system where Kepler's Laws (wikipedia) are defined. To do this, we again use the exact same set of points generated in the simulation, and subtract the position of the larger body at each time step. So

    x(A,ref), y(A,ref), z(A,ref) = position of object A, in the coordinate system where the origin is object ref.

    x(A,ref) = x(A) - x(A) = 0.0
    y(A,ref) = y(A) - y(A) = 0.0
    Object A resides at the origin in this coordinate system.

    x(B,ref) = x(B) - x(A)
    y(B,ref) = y(B) - y(A)

    Plotting these results for each point in the simulation generates the movie

    We also plot the position of the center-of-mass in this new coordinate system, which appears to orbit the main body.

    For this simulation, I've chosen 1 AU as the unit of distance, and 1 year as the unit of time. The large body is 10 solar masses and the smaller is 1 solar mass. I've chosen the starting positions and velocities so that the orbits are sufficiently elliptical that you can see motions relative to the center-of-mass. Competent scientists could use the information in these plots to estimate such additional parameters as speeds at apoapsis and periapsis. Do EU 'theorists' know how to determine this information from the simulation?

    To illustrate the flexibility of these types of programs, I can also generate simulations of electromagnetic systems. Here I have a plot of some electrons and nuclei (hydrogen, deuterium, tritium, helium-3, helium-4) in a configuration where an external electric (yellow arrow) and magnetic field (blue arrow) are at right angles to each other.

    This visualization was generated using the POVray renderer. Currently, electromagnetic interactions between particles are not included in the simulation run, but will be added in the future.

    Note that I have placed Creative Commons notices (BY-NC) on these movies for a reason (see Setterfield Again). Any website which hosts these movies must include a proper credit to me and a link back to this page.

    These are some opening samples, but just the tip of the iceberg of what can be done with this tool:

    • An introduction to N-Body simulations
    • The physics and mathematics of N-Body simulations
    • Some features of my particular implementation
    • How do we test/validate the simulations?
    • Lagrange Points
    • Lagrange Points with an extra planet
    • Three-body systems such as Sun-Earth-Moon
    • Three-body systems such as Sun-Earth-Jupiter
    • Solar system model (Sun + eight planets)
    • Various particle and electromagnetic field configurations
    • The “Tatooine system”, Kepler 16AB (NASA, Exoplanet Catalog)
    • Lorentz Slingshot (see 365 days of astronomy podcast)
    • Velikovsky's Venus (what would happen to a planet launched out of Jupiter?)
    • The Newton-Coulomb problem, or what if the Sun and planets had significant electric charge?
    • Particle-in-Cell (PIC) support to accommodate more realistic plasma models (wikipedia). This moves the code in a direction suitable to run a version of Peratt's galaxy simulations. Will I be generating testable models while Siggy_G is still generating excuses?

    And Still the Pseudo-Scientists Can't Meet the Standards

    One of the popular whines from pseudo-scientists is that they could produce worthy results if they had the same funding as mainstream scientists.

    Gee, I'm sure a lot of practicing scientists would like to have that kind of funding as well, in the rare cases where it exists.

    Most scientists are not paid to do research full time. I certainly am not. Often research is a part-time project, done between teaching classes or doing support work (writing analysis software, archiving datasets, visualization production) for other science projects.

    Note that I did not have any grant to work on this project. I didn't have to hire an army, or even one programmer, beyond myself. Beyond the computer and operating system, I used open-source software available to anyone. This was done on commercial grade computer systems (laptop & desktop), with the laptop used primarily for code development and testing, and the desktop system used for running the more heavy-duty processing of full simulations.

    The code was developed in my spare time (though some components of the development have already found application in my day job).


    How to project galaxy data in x y z coordinates?

    I am in 10th grade and consider myself a fairly competent amateur astronomer. I am doing a project on the distribution of certain objects using some of the newish redshift data. I have been looking and looking for a formula that I could use to change RA, DEC and distance to X, Y and Z. I want to do this so that I can measure distances from one object to the next, which I can't do unless I put the objects on a coordinate grid.

    In mathematics and physics, we don't only use cartesian coordinates (x, y, and z); for some kinds of problems we use other coordinate systems. In your case, the natural coordinate system to use would be 'spherical coordinates'. You can look at the web page http://mathworld.wolfram.com/SphericalCoordinates.html, and I will tell you how to relate these coordinates to your problem. (By the way, if you don't know this web page yet, it is a reliable source of information about math.)

    In spherical coordinates, you have 3 coordinates, just like you have x, y and z in cartesian coordinates. These coordinates are: r, which is the distance between the origin and a point, and then two angles that work just like latitude and longitude on Earth, or like RA and DEC in the sky.

    So if you want to apply these to your objects, you have to set r to the distance to the object (which you get from its redshift, as you know). Then 'theta' is the azimuthal angle, which means it is your RA. But watch out! RA is generally given in units of hours, minutes, and seconds, which you will have to convert to decimal degrees. And finally, the 'phi' angle is your DEC. But once again, if your declinations are in degrees, minutes and seconds, you will need to convert them to decimal degrees. DEC goes from 90 degrees at the north celestial pole to -90 at the south celestial pole, but you need 'phi' to go from 0 to 180 degrees, so you should use phi = 90 degrees - DEC.

    Now all your objects should be described by these coordinates. But to measure distances between them, the easiest approach might be to convert these coordinates to standard x, y, and z coordinates, and then calculate distances as usual. If you go to the web page I mentioned before, you will find formulas to go from r, theta and phi to x, y, and z (equations 4, 5, and 6 as they number them). This should do the trick for you!
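
    In code form, the whole recipe looks something like this sketch (decimal degrees assumed):

        import numpy as np

        def radec_to_xyz(ra_deg, dec_deg, r):
            """theta = RA, phi = 90 deg - DEC, then MathWorld's eqs. 4-6."""
            theta = np.radians(ra_deg)
            phi = np.radians(90.0 - dec_deg)
            x = r * np.cos(theta) * np.sin(phi)
            y = r * np.sin(theta) * np.sin(phi)
            z = r * np.cos(phi)
            return x, y, z

        # The distance between two objects is then the usual
        # sqrt((x1-x2)**2 + (y1-y2)**2 + (z1-z2)**2).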

    Good luck with your project!


    About the Author

    Amelie Saintonge

    Amelie is working on ways to detect the signals of galaxies from radio maps.




    Axisymmetric Models

    In describing axisymmetric galaxy models we may use cylindrical coordinates (R, phi, z), where R and phi are polar coordinates in the equatorial plane, and z is the coordinate perpendicular to that plane. If the mass distribution is axisymmetric, the potential is also axisymmetric: Phi = Phi(R,z). The equations of motion in cylindrical coordinates are then

        $\ddot R - R\dot\phi^2 = -\partial\Phi/\partial R, \qquad \ddot z = -\partial\Phi/\partial z, \qquad \frac{d}{dt}\left(R^2\dot\phi\right) = 0,$

    where the last equation follows because Phi does not depend on phi; thus the azimuthal force is always zero.

    Classical integrals

    Motion in a time-independent axisymmetric potential conserves two classical integrals of motion, the total energy and the z-component of the angular momentum:

        $E = \tfrac{1}{2}\left(\dot R^2 + R^2\dot\phi^2 + \dot z^2\right) + \Phi(R,z), \qquad L_z = R^2\dot\phi.$

    The meridional plane

    In terms of L_z, the equations of motion may be rewritten

        $\ddot R = -\partial\Phi_e/\partial R, \qquad \ddot z = -\partial\Phi_e/\partial z,$

    where Phi_e is the effective potential, given by

        $\Phi_e(R,z) = \Phi(R,z) + \frac{L_z^2}{2R^2}.$

    These equations describe the motion in the meridional plane, which rotates about the z-axis at an angular rate of $\dot\phi = L_z/R^2$.

    Numerical calculations show that most orbits in `plausible' axisymmetric potentials are not completely characterized by the energy and the z-component of angular momentum, implying that most orbits have a third integral! No general expressions for this non-classical integral are known, although for nearly spherical systems it is approximated by |L| , the magnitude of the total angular momentum.
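
    One quick way to see this is to integrate a meridional-plane orbit numerically; a sketch using a logarithmic potential of the kind discussed below, with arbitrary parameter values:

        import numpy as np
        from scipy.integrate import solve_ivp

        v0, Rc, q, Lz = 1.0, 0.14, 0.9, 0.2   # illustrative values

        def deriv(t, s):
            """Meridional-plane motion: s = [R, z, vR, vz]."""
            R, z, vR, vz = s
            denom = Rc**2 + R**2 + (z / q)**2
            aR = -v0**2 * R / denom + Lz**2 / R**3   # -dPhi_e/dR
            az = -v0**2 * z / (q**2 * denom)         # -dPhi_e/dz
            return [vR, vz, aR, az]

        sol = solve_ivp(deriv, (0.0, 100.0), [0.5, 0.0, 0.0, 0.3], rtol=1e-9)
        # Plotting sol.y[1] against sol.y[0] traces the orbit in (R, z);
        # a surface of section makes the effective third integral visible.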

    Two-Integral Models

    Despite the existence of third integrals in most axisymmetric potentials, it is reasonable to ask if models based on just two integrals can possibly describe real galaxies. In such models the distribution function has the form $f = f(E, L_z)$. One immediate result is that the distribution function depends on the R and z components of the velocity only through the combination $v_R^2 + v_z^2$; thus in all two-integral models the velocity dispersions in the R and z directions must be equal: $\sigma_R = \sigma_z$. Note that this equality does not hold for the Milky Way; thus our galaxy cannot be described by a two-integral model. For other galaxies, however, the situation is not so clear, and a two-integral model may suffice.

    Distribution functions

    Calculating rho(R,z) from f(E,L_z): Much as in the spherically symmetric case described last time, one may adopt a plausible guess for f(E,L_z), derive the corresponding density rho(R,z), and solve Poisson's equation for the gravitational potential. Perhaps the most interesting example of this approach is a series of scale-free models with r^-2 density profiles (Toomre 1982); however, these models are somewhat implausible in that the density vanishes along the minor axis.

    Calculating f(E,L_z) from rho(R,z): Conversely, one may try to find a distribution function which generates a given rho(R,z). This problem is severely underconstrained because a star contributes equally to the total density regardless of its sense of motion about the z-axis; formally, if f(E,L_z) yields the desired rho(R,z), then so does f(E,L_z) + f_o(E,L_z), where f_o(E,L_z) = -f_o(E,-L_z) is any odd function of L_z. The odd part of the distribution function can be found from the kinematics, since it determines the net streaming motion in the phi direction (BT87, Chapter 4.5.2(a)).

    Even if kinematic data is available, this approach is not practical for modeling observed galaxies. The reason is that the transformation from density (and streaming velocity) to distribution function is unstable: small errors in the input data can produce huge variations in the results (e.g. Dejonghe 1986, BT87). A few two-integral distribution functions are known for analytic density distributions, and recent developments have removed some mathematical obstacles to the construction of more models (Hunter & Qian 1993).

    An 'unbelievably simple' and analytic distribution function exists for the mass distribution which generates the axisymmetric logarithmic potential (Evans 1993). This potential, introduced to describe the halos of galaxies (Binney 1981; BT87, Chapter 2.2.2), has the form

        $\Phi(R,z) = \tfrac{1}{2} v_0^2 \ln\!\left(R_c^2 + R^2 + z^2/q^2\right),$

    where v_0 is the velocity scale, R_c is the core scale radius, and q is the flattening of the potential (the mass distribution is even flatter). The corresponding distribution function has the form

        $f(E,L_z) = \left(A L_z^2 + B\right) e^{-4E/v_0^2} + C\, e^{-2E/v_0^2},$

    where A, B, and C are constants. Evans also divides this distribution function up into 'luminous' and 'dark' components to obtain models of luminous galaxies embedded in massive dark halos; his results illustrate a number of important points, including the non-Gaussian line profiles which result when the luminous distribution function is anisotropic.

    Jeans-equation models

    Since we cannot (yet) construct distribution functions for real galaxies, consider the simpler problem of modeling observed systems using the Jeans equations. If we assume that the underlying distribution function depends only on E and L_z, we can simplify the Jeans equations, since the radial and vertical dispersions must be everywhere equal; thus

        $\nu\,\overline{v_R^2}(R,z) = \int_z^\infty \nu\,\frac{\partial\Phi}{\partial z'}\,dz', \qquad \overline{v_\phi^2} = \overline{v_R^2} + \frac{R}{\nu}\,\frac{\partial\left(\nu\,\overline{v_R^2}\right)}{\partial R} + R\,\frac{\partial\Phi}{\partial R}.$

    At each R one can calculate the mean squared velocity in the R direction by integrating the first equation inward from z = infinity; the mean squared velocity in the phi direction then follows from the second.

    The Jeans equations do not tell us how to divide the azimuthal velocities into streaming and random components. One choice which is popular, although lacking a physical basis, is

        $\overline{v_\phi}^{\,2} = k^2 \left(\overline{v_\phi^2} - \overline{v_R^2}\right),$

    where k is a free parameter (Satoh 1980). The dispersion in the phi direction is then

        $\sigma_\phi^2 = \overline{v_\phi^2} - k^2\left(\overline{v_\phi^2} - \overline{v_R^2}\right).$

    Note that if k = 1 the velocity dispersion is isotropic and the excess azimuthal motion is entirely due to rotation, while for k < 1 the azimuthal dispersion exceeds the radial dispersion.

    Application to elliptical galaxies

    1. Observe the surface brightness Sigma(x',y')
    2. Deproject to get the stellar density nu(R,z), assuming an inclination angle
    3. Compute the potential Phi(R,z), assuming a constant mass-to-light ratio
    4. Solve the Jeans equations for the mean squared velocities
    5. Divide the azimuthal motion into streaming and random parts
    6. Project the velocities back onto the plane of the sky to get the line-of-sight velocity and dispersion, v_los(x',y') and sigma_los(x',y')
    7. Compare the predicted and observed kinematics.
    • Isotropic oblate rotators (k = 1) do not fit
    • Some galaxies (e.g. NGC 1052) are well fit by two-integral Jeans-equation models
    • The models predict major-axis velocity dispersions in excess of those observed in most galaxies
    • Consequently, most of the galaxies must have (and 'use') a third integral, or are in fact triaxial.

    Alas, without an analytic expression for the third integral, the machinery developed so far cannot be extended to model real galaxies in more detail. The best available methods are very similar to those used for triaxial systems, the subject of the next lecture.


    The Tycho Catalog Skymap - Version 2.0

    The stars are plotted as gaussian point-spread functions (PSFs), so the size and amplitude of each star correspond to its relative intensity. The stars are also elongated in Right Ascension (celestial longitude) based on declination (celestial latitude), so stars in the polar regions will still be round when projected on a sphere. Stars fainter than the threshold magnitude, usually selected as 5th magnitude, have their magnitude-intensity curve adjusted so they appear brighter than they really are. This makes the band of the Milky Way more visible.

    Stellar colors are assigned based on B and V magnitudes (B and V are stellar magnitudes measured through different filters). If Johnson B and V magnitudes are unavailable, Tycho B and V magnitudes are used instead. From these, an effective stellar temperature is derived using the algorithms described in Flower (1996, ApJ 469, 355); corrections were noted by Siobahn Morgan (UNI). The effective temperature is then converted to CIE tristimulus X,Y,Z triples assuming a black-body emission distribution, and the X,Y,Z values are converted to red-green-blue color pixels. About 2.4 million stars are plotted, but many may be below the pixel intensity resolution. The three most conspicuously missing objects on these maps are the Andromeda galaxy (M31) and the two Magellanic Clouds.
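
    A rough sketch of the colour step, substituting the Ballesteros (2012) approximation for the Flower (1996) algorithm the map actually uses, whose coefficients are not reproduced here:

        def bv_to_teff(bv):
            """Approximate effective temperature (K) from B-V colour
            (Ballesteros 2012), standing in for Flower (1996)."""
            return 4600.0 * (1.0 / (0.92 * bv + 1.7)
                             + 1.0 / (0.92 * bv + 0.62))

        # e.g. the Sun: B-V ~ 0.65 gives roughly 5800 K, which would then
        # be converted to CIE X,Y,Z via a black-body spectrum and on to RGB.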

    Changes from the first version (#3442, The Tycho Catalog Skymap): The star generation algorithm now favors use of the Johnson magnitudes when available. This improves the star colors over the previous method. The star intensity profiles are also slightly modified to make the cores brighter, with a faster intensity falloff. We have also set the color standard to SMPTE with a gamma of 1.8.

    Update: This skymap has been revised. The newer version is available at Deep Star Maps.

