We serve many small, medium, and large-scale industries by developing industrial simulators for their machines, plants, and parts.
Companies can have a simulator developed for a particular machine or plant, and employees can then receive practical training on that machine without wasting valuable raw material.
It makes it easy to apply statistical methods and derive outputs. The software is privately owned by an American corporation and specializes in mathematical computing solutions. Simulation software helps predict the behavior of a system: you can evaluate a new design, check for problems, and test a model under various conditions to obtain output.
It is comparatively less expensive to create and simulate models than to build and test prototypes, so different designs can easily be tested before one is built in hardware, and the resulting design can then be connected and integrated fully into the system. It provides the user with time-based simulation, event-based simulation, and physical-systems simulation.
Its recent achievements include significant enhancements in animation and design. The software provides high-level flexibility and functionality to meet the needs of the user: it does not work on a trial-and-error or guessing basis, so the user gets the right output in one try. Its events are specific, flexible, and activity-based. Arena has been a top simulation software package for the last 30 years.
This simulation software makes it possible to optimize and study any system in any industry. It belongs to the category of discrete-event simulation tools and is developed by FlexSim Software Products.
It was released in the USA and uses little or no computer code: most of the work is done with arrays, drop-down lists, and property windows to customize user-required models. FlexSim supports user-oriented design, and you can build models by dragging and dropping predefined 3D objects. This software provides simulation and modeling to improve productivity across different areas. Simulations Plus provides solutions for the biochemical, pharmaceutical, chemical, cosmetics, and herbicide industries.
This software is a computer-aided engineering application that runs on Microsoft Windows. It was developed around the concept of a discrete network application. The user can choose 1D, 2D, or 3D modeling as required and obtain results accordingly. Simulation models are built from parameterized components and integrated libraries, and the software includes signal blocks, mechanics, fluid power, and power transmission.
It is used for designing, analyzing, and modeling complex systems, transforming them into simpler solutions, and it offers ready-to-use simulation models and pre-configured components.

Computer simulations are often used either because the original model itself contains discrete equations—which can be directly implemented in an algorithm suitable for simulation—or because the original model consists of something better described as rules of evolution than as equations.
There are cases in which different results are obtained because of variations in these particulars. More broadly, we can think of computer simulation as a comprehensive method for studying systems.
In this broader sense of the term, it refers to an entire process. This process includes choosing a model; finding a way of implementing that model in a form that can be run on a computer; calculating the output of the algorithm; and visualizing and studying the resultant data. The method includes this entire process—used to make inferences about the target system that one tries to model—as well as the procedures used to sanction those inferences.
This is more or less the definition of computer simulation studies in Winsberg. Simulation studies make use of a variety of techniques to draw inferences from the numbers a simulation produces, and they make creative use of calculational techniques that can only be motivated extra-mathematically and extra-theoretically.
As such, unlike simple computations that can be carried out on a computer, the results of simulations are not automatically reliable. Much effort and expertise goes into deciding which simulation results are reliable and which are not. Both of the above definitions take computer simulation to be fundamentally about using a computer to solve, or to approximately solve, the mathematical equations of a model that is meant to represent some system—either real or hypothetical.
On this approach, a simulation is any system that is believed, or hoped, to have dynamical behavior that is similar enough to some other system that the former can be studied to learn about the latter. For example, if we study some object because we believe it is sufficiently dynamically similar to a basin of fluid for us to learn about basins of fluid by studying it, then it provides a simulation of basins of fluid.
Humphreys revised his definition of simulation to accord with the remarks of Hartmann and Hughes as follows:
Note that Humphreys is here defining computer simulation, not simulation generally, but he is doing it in the spirit of defining a compositional term. In most philosophical discussions of computer simulation, the more useful concept is the one defined in 1.
The exception is when it is explicitly the goal of the discussion to understand computer simulation as an example of simulation more generally (see section 5). Another nice example is discussed extensively in Dardashti et al.
Physicist Bill Unruh noted that in certain fluids, something akin to a black hole would arise if there were regions of the fluid moving so fast that waves would have to move faster than the speed of sound (something they cannot do) in order to escape from them (Unruh). Such regions would in effect have sonic event horizons.
For some time, this proposal was viewed as nothing more than a clever idea, but physicists have recently come to realize that, using Bose-Einstein condensates, they can actually build and study dumb holes in the laboratory. It is clear why we should think of such a setup as a simulation: the dumb hole simulates the black hole. Instead of finding a computer program to simulate the black holes, physicists find a fluid dynamical setup for which they believe they have a good model and for which that model has fundamental mathematical similarities to the model of the systems of interest.
They observe the behavior of the fluid setup in the laboratory in order to make inferences about the black holes. The point, then, of the definitions of simulation in this section is to try to understand in what sense computer simulation and these sorts of activities are species of the same genus.
We might then be in a better situation to understand why a simulation in the sense of 1. We will come back to this in section 5. Barberousse et al. point out that it is not the case that the computer, as a material object, and the target system follow the same differential equations.
A good reference about simulations that are not computer simulations is Trenholme.

Two types of computer simulation are often distinguished: equation-based simulations and agent-based (or individual-based) simulations. Equation-based simulations are most commonly used in the physical sciences and other sciences where there is governing theory that can guide the construction of mathematical models based on differential equations. Equation-based simulations can either be particle-based, where there are n-many discrete bodies and a set of differential equations governing their interaction, or field-based, where there is a set of equations governing the time evolution of a continuous medium or field.
An example of the former is a simulation of galaxy formation, in which the gravitational interaction between a finite collection of discrete bodies is discretized in time and space. An example of the latter is the simulation of a fluid, such as a meteorological system like a severe storm. Here the system is treated as a continuous medium—a fluid—and a field representing its distribution of the relevant variables in space is discretized in space and then updated in discrete intervals of time.
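To make the particle-based case concrete, here is a minimal sketch of a time-discretized n-body gravitational step. It is not drawn from the text itself; the two bodies, the forward-Euler update, and the step size are illustrative assumptions chosen for brevity rather than accuracy.

```python
# Minimal sketch: a particle-based simulation discretized in time.
# The bodies, masses, and step size are illustrative assumptions, and
# the forward-Euler update is chosen for brevity, not for accuracy.
import numpy as np

G = 6.674e-11          # gravitational constant (SI units)
dt = 60.0              # assumed time step in seconds

# Two bodies: position (m), velocity (m/s), mass (kg)
pos = np.array([[0.0, 0.0], [3.844e8, 0.0]])   # roughly Earth-Moon separation
vel = np.array([[0.0, 0.0], [0.0, 1022.0]])    # rough orbital speed of the Moon
mass = np.array([5.972e24, 7.348e22])

def accelerations(pos, mass):
    """Pairwise gravitational accelerations for n discrete bodies."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

for step in range(1000):        # discrete update in time
    acc = accelerations(pos, mass)
    vel += dt * acc             # forward-Euler step
    pos += dt * vel
```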
Agent-based simulations are most common in the social and behavioral sciences, though we also find them in such disciplines as artificial life, epidemiology, ecology, and any discipline in which the networked interaction of many individuals is being studied. Agent-based simulations are similar to particle-based simulations in that they represent the behavior of n-many discrete individuals.
But unlike particle-based simulations, there are no global differential equations that govern the motions of the individuals. Rather, in agent-based simulations, the behavior of the individuals is dictated by their own local rules. A classic example is Schelling's model of segregation: the individuals were divided into two groups in the society, and each square on a board represented a house, with at most one person per house. Happy agents stay where they are; unhappy agents move to free locations.
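As an illustration of how such local rules can be implemented, here is a minimal sketch of a Schelling-style agent-based model. The grid size, the fraction of empty cells, and the "happiness" threshold are assumptions chosen for illustration, not values from any particular study.

```python
# Minimal sketch of an agent-based (Schelling-style) segregation model.
# Grid size, empty-cell fraction, and the happiness threshold are assumed.
import random

SIZE, EMPTY_FRAC, THRESHOLD = 20, 0.2, 0.3
grid = [[random.choice([0, 1]) if random.random() > EMPTY_FRAC else None
         for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    """An agent is unhappy if too few of its neighbours share its group."""
    me = grid[r][c]
    neighbours = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr, dc) != (0, 0)]
    same = sum(1 for n in neighbours if n == me)
    occupied = sum(1 for n in neighbours if n is not None)
    return occupied > 0 and same / occupied < THRESHOLD

for sweep in range(50):          # only local rules, no global equations
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None and unhappy(r, c):
                empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
                           if grid[i][j] is None]
                if empties:      # unhappy agent moves to a free location
                    i, j = random.choice(empties)
                    grid[i][j], grid[r][c] = grid[r][c], None
```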
In section 2. But some simulation models are hybrids of different kinds of modeling methods. Multiscale simulation models, in particular, couple together modeling elements from different scales of description. A good example of this would be a model that simulates the dynamics of bulk matter by treating the material as a field undergoing stress and strain at a relatively coarse level of description, but which zooms into particular regions of the material where important small scale effects are taking place, and models those smaller regions with relatively more fine-grained modeling methods.
Such methods might rely on molecular dynamics, or quantum mechanics, or both—each of which is a more fine-grained description of matter than is offered by treating the material as a field. Multiscale simulation methods can be further broken down into serial multiscale and parallel multiscale methods.
The more traditional method is serial multiscale modeling. The idea here is to choose a region, simulate it at the lower level of description, summarize the results into a set of parameters digestible by the higher-level model, and pass them up to the part of the algorithm calculating at the higher level.
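The serial pattern can be sketched in Python. All of the function names and parameter values below are hypothetical placeholders standing in for a real fine-grained solver and a real coarse-grained model.

```python
# Sketch of the serial multiscale pattern: simulate a selected region at the
# finer level, summarize the result into parameters, and hand those parameters
# to the coarse-level model. Everything here is a hypothetical placeholder.
def fine_scale_simulation(region):
    """Stand-in for, e.g., a molecular-dynamics run on a small region."""
    # ... run the fine-grained model ...
    return {"stress_response": 2.1, "damping": 0.05}   # toy numbers

def summarize(fine_results):
    """Reduce fine-scale output to parameters the coarse model can digest."""
    return fine_results["stress_response"], fine_results["damping"]

def coarse_scale_step(state, stiffness, damping):
    """One update of the coarse model, using parameters passed up from below."""
    position, velocity = state
    dt = 0.01
    acceleration = -stiffness * position - damping * velocity
    return position + dt * velocity, velocity + dt * acceleration

# Serial coupling: the fine model runs first, then informs the coarse model.
stiffness, damping = summarize(fine_scale_simulation(region="hot spot"))
state = (1.0, 0.0)
for _ in range(100):
    state = coarse_scale_step(state, stiffness, damping)
```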
Serial multiscale methods are not effective when the different scales are strongly coupled together. When the different scales interact strongly to produce the observed behavior, what is required is an approach that simulates each region simultaneously.
This is called parallel multiscale modeling. One important variety of parallel multiscale method is sub-grid modeling, which refers to the representation of important small-scale physical processes that occur at length scales that cannot be adequately resolved on the grid size of a particular simulation. In fluid simulations, for example, this is done by adding to the large-scale motion an eddy viscosity that characterizes the transport and dissipation of energy in the smaller-scale flow—or any such feature that occurs at too small a scale to be captured by the grid.
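A rough sketch of the eddy-viscosity idea follows, assuming a simple Smagorinsky-style closure on a one-dimensional coarse grid; the constants, the toy velocity field, and the use of the local gradient as a stand-in for the strain rate are illustrative assumptions, not taken from any production code.

```python
# Sketch of sub-grid modeling: small-scale turbulent transport that the grid
# cannot resolve is represented by an added "eddy viscosity". The constants
# and the Smagorinsky-style closure are illustrative assumptions.
import numpy as np

nx, dx, dt = 100, 1.0, 0.01
nu_molecular = 1e-3
C_s = 0.17                                    # assumed closure constant
u = np.sin(2 * np.pi * np.arange(nx) / nx)    # coarse-grid velocity field

for step in range(500):
    dudx = np.gradient(u, dx)
    # Eddy viscosity grows with the locally resolved velocity gradient.
    nu_eddy = (C_s * dx) ** 2 * np.abs(dudx)
    nu_eff = nu_molecular + nu_eddy
    # Diffuse momentum with the effective (molecular + eddy) viscosity.
    u += dt * np.gradient(nu_eff * dudx, dx)
```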
This is in contrast to other processes that are computed directly at the grid level. In climate science, such sub-grid methods are known as parameterizations; examples of parameterization in climate simulations include the descent rate of raindrops, the rate of atmospheric radiative transfer, and the rate of cloud formation.
For example, the average cloudiness over a grid box is not cleanly related to the average humidity over the box.
Nonetheless, as the average humidity increases, average cloudiness will also increase—hence there could be a parameter linking average cloudiness to average humidity inside a grid box. Even though modern-day parameterizations of cloud formation are more sophisticated than this, the basic idea is well illustrated by the example.
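A toy sketch of such a parameterization is given below, assuming a simple threshold-and-power-law form linking grid-box mean relative humidity to diagnosed cloud fraction; the functional form and the critical humidity value are arbitrary assumptions, far simpler than schemes used in real climate models.

```python
# Toy grid-box parameterization: a diagnostic rule linking average cloudiness
# to average humidity. The form and threshold are illustrative assumptions.
def cloud_fraction(mean_relative_humidity, rh_critical=0.75):
    """Diagnose average cloudiness in a grid box from its mean humidity.

    Below an assumed critical relative humidity the box is taken to be
    cloud-free; above it, cloudiness increases smoothly toward 1 at saturation.
    """
    if mean_relative_humidity <= rh_critical:
        return 0.0
    return min(1.0, ((mean_relative_humidity - rh_critical)
                     / (1.0 - rh_critical)) ** 2)

# The coarse model never resolves individual clouds; it only sees this
# parameterized relationship, evaluated once per grid box and time step.
for rh in (0.6, 0.8, 0.95, 1.0):
    print(rh, cloud_fraction(rh))
```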
The use of sub-grid modeling methods in simulation has important consequences for understanding the structure of the epistemology of simulation. This will be discussed in greater detail in section 4. Sub-grid modeling methods can be contrasted with another kind of parallel multiscale model in which the sub-grid algorithms are more theoretically principled, but are motivated by a theory at a different level of description. In the example of the simulation of bulk matter mentioned above, the algorithm driving the smaller level of description is not built in a seat-of-the-pants fashion.
The algorithm driving the smaller level is actually more theoretically principled than the higher level, in the sense that its physics is more fundamental: quantum mechanics or molecular dynamics, as opposed to the continuum-level treatment of the material as a field. These kinds of multiscale models, in other words, cobble together the resources of theories at different levels of description. So they provide interesting examples that provoke our thinking about intertheoretic relationships, and that challenge the widely-held view that an inconsistent set of laws can have no models.
In the scientific literature, there is another large class of computer simulations called Monte Carlo (MC) simulations. MC simulations are computer algorithms that use randomness to calculate the properties of a mathematical model, where the randomness of the algorithm is not a feature of the target model. Many philosophers of science have deviated from ordinary scientific language here and have shied away from thinking of MC simulations as genuine simulations.
Consider, for example, using an MC algorithm that randomly drops points into a square containing an inscribed circle in order to estimate the value of π, which is then used in a calculation about planetary orbits. Such a procedure does not fit any of the above definitions aptly. On the other hand, the divide between philosophers and ordinary language can perhaps be squared by noting that MC simulations simulate an imaginary process that might be used for calculating something relevant to studying some other process.
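A minimal sketch of that Monte Carlo procedure is given below: the randomness lives entirely in the algorithm used to estimate π, not in whatever target system (say, a planetary orbit) the value of π is later used to study.

```python
# Sketch of the Monte Carlo example: randomly "dropping" points into a square
# that contains an inscribed circle in order to estimate pi. The randomness
# belongs to the algorithm, not to any target system being studied.
import random

def estimate_pi(n_samples=1_000_000):
    inside = 0
    for _ in range(n_samples):
        x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:      # point landed inside the unit circle
            inside += 1
    # Area ratio circle/square = pi/4, so scale the hit fraction by 4.
    return 4.0 * inside / n_samples

print(estimate_pi())
```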
If I run the MC simulation just described, I am simulating the process of randomly dropping objects into a square, but what I am modeling is a planetary orbit. This is the sense in which MC simulations are simulations, but they are not simulations of the systems they are being used to study. However, as Beisbart and Norton point out, some MC simulations (viz., those that use MC techniques to solve the dynamical equations of the target system itself) are in fact simulations of the systems they study.

There are three general categories of purposes to which computer simulations can be put. Simulations can be used for heuristic purposes, for the purpose of predicting data that we do not have, and for generating understanding of data that we do already have.
Under the category of heuristic models, simulations can be further subdivided into those used to communicate knowledge to others, and those used to represent information to ourselves. When Watson and Crick played with tin plates and wire, they were doing the latter at first, and the former when they showed the results to others.
When the Army Corps of Engineers built the model of the San Francisco Bay to convince the voting population that a particular intervention was dangerous, they were using it for this kind of heuristic purpose. Computer simulations can be used for both of these kinds of purposes—to explore features of possible representational structures, or to communicate knowledge to others.
Another broad class of purposes to which computer simulations can be put is in telling us about how we should expect some system in the real world to behave under a particular set of circumstances.
Loosely speaking, computer simulation can be used for prediction. We can use models to predict the future, or to retrodict the past; we can use them to make precise predictions or loose and general ones. With regard to the relative precision of the predictions we make with simulations, we can be slightly more fine-grained in our taxonomy. There are, on the one hand, point predictions (Where will the planet Mars be on October 21st of some specified year?) and, on the other, more global or qualitative predictions (What scaling law emerges in these kinds of systems? What is the fractal dimension of the attractor for systems of this kind?). Finally, simulations can be used to understand systems and their behavior. If we already have data telling us how some system behaves, we can use computer simulation to answer questions about how these events could possibly have occurred, or about how those events actually did occur. When thinking about the topic of the next section, the epistemology of computer simulations, we should also keep in mind that the procedures needed to sanction the results of simulations will often depend, in large part, on which of the above kinds of purposes the simulation will be put to.
As computer simulation methods have gained importance in more and more disciplines, the issue of their trustworthiness for generating new knowledge has grown, especially when simulations are expected to be counted as epistemic peers with experiments and traditional analytic theoretical methods.
The relevant question is always whether or not the results of a particular computer simulation are accurate enough for their intended purpose. If a simulation is being used to forecast weather, does it predict the variables we are interested in to a degree of accuracy that is sufficient to meet the needs of its consumers? If a simulation of the atmosphere above a Midwestern plain is being used to understand the structure of a severe thunderstorm, do we have confidence that the structures in the flow—the ones that will play an explanatory role in our account of why the storm sometimes splits in two, or why it sometimes forms tornados—are being depicted accurately enough to support our confidence in the explanation?
If a simulation is being used in engineering and design, are the predictions made by the simulation reliable enough to sanction a particular choice of design parameters, or to sanction our belief that a particular design of airplane wing will function?
More generally, how can the claim that a simulation is good enough for its intended purpose be evaluated? These are the central questions of the epistemology of computer simulation (EOCS). Given that confirmation theory is one of the traditional topics in philosophy of science, it might seem obvious that the latter would have the resources to begin to approach these questions.
Winsberg, however, argued that when it comes to topics related to the credentialing of knowledge claims, philosophy of science has traditionally concerned itself with the justification of theories, not their application.
Most simulation, on the other hand, to the extent that it makes use of theory at all, tends to make use of well-established theory. EOCS, in other words, is rarely about testing the basic theories that may go into the simulation, and most often about establishing the credibility of the hypotheses that are, in part, the result of applications of those theories. Winsberg argued that, unlike the epistemological issues that take center stage in traditional confirmation theory, an adequate EOCS must meet three conditions.
In particular, it must take account of the fact that the knowledge produced by computer simulations is the result of inferences that are downward, motley, and autonomous.
EOCS must reflect the fact that in a large number of cases, accepted scientific theories are the starting point for the construction of computer simulation models and play an important role in the justification of inferences from simulation results to conclusions about real-world target systems.
EOCS must take into account that simulation results nevertheless typically depend not just on theory but on many other model ingredients and resources as well, including parameterizations (discussed above), numerical solution methods, mathematical tricks, approximations and idealizations, outright fictions, ad hoc assumptions, function libraries, compilers and computer hardware, and perhaps most importantly, the blood, sweat, and tears of much trial and error.
EOCS must take into account the autonomy of the knowledge produced by simulation in the sense that the knowledge produced by simulation cannot be sanctioned entirely by comparison with observation. Simulations are usually employed to study phenomena where data are sparse. In these circumstances, simulations are meant to replace experiments and observations as sources of data about the world because the relevant experiments or observations are out of reach, for principled, practical, or ethical reasons.
Parker has made the point that the usefulness of these conditions is somewhat compromised by the fact that they are overly focused on simulation in the physical sciences and other disciplines where simulation is theory-driven and equation-based.
This seems correct. In the social and behavioral sciences, and in other disciplines where agent-based simulation (see 2.) is more common and models are not driven by well-established, equation-based theory, the picture looks rather different. For instance, some social scientists who use agent-based simulation pursue a methodology in which social phenomena (for example, an observed pattern like segregation) are explained, or accounted for, by generating similar-looking phenomena in their simulations (Epstein and Axtell; Epstein). But this raises its own sorts of epistemological questions.
What exactly has been accomplished, what kind of knowledge has been acquired, when an observed social phenomenon is more or less reproduced by an agent-based simulation?
Does this count as an explanation of the phenomenon? A possible explanation? It is also fair to say, as Parker does, that the conditions outlined above pay insufficient attention to the various and differing purposes for which simulations are used (as discussed in 2.). If we are using a simulation to make detailed quantitative predictions about the future behavior of a target system, the epistemology of such inferences might require more stringent standards than those that are involved when the inferences being made are about the general, qualitative behavior of a whole class of systems.
Indeed, it is also fair to say that much more work could be done in classifying the kinds of purposes to which computer simulations are put and the constraints those purposes place on the structure of their epistemology.
Frigg and Reiss argued that none of these three conditions is new to computer simulation. Indeed, they argued that computer simulation could not possibly raise new epistemological issues, because those issues can be cleanly divided in two: the question of the appropriateness of the model underlying the simulation, which is identical to the epistemological issues that arise in ordinary modeling, and the question of the correctness of the solution to the model equations delivered by the simulation, which is a mathematical question, not one related to the epistemology of science.
On the first point, Winsberg b replied that it was the simultaneous confluence of all three features that was new to simulation. We will return to the second point in section 4. Some of the work on EOCS has developed analogies between computer simulation and experiment in order to draw on recent work in the epistemology of experiment, particularly the work of Allan Franklin; see the entry on experiments in physics.
In his work on the epistemology of experiment, Franklin identified a number of strategies that experimenters use to increase rational confidence in their results. Weissart and Parker a argued for various forms of analogy between these strategies and a number of strategies available to simulationists to sanction their results. The most detailed analysis of these relationships is to be found in Parker a, where she also uses these analogies to highlight weaknesses in current approaches to simulation model evaluation.
With the slogan that experiments "have a life of their own", Hacking intended to convey two things. The first was a reaction against the unstable picture of science that comes, for example, from Kuhn. Hacking suggests that experimental results can remain stable even in the face of dramatic changes in the other parts of science.
Some of the techniques that simulationists use to construct their models get credentialed in much the same way that Hacking says that instruments and experimental procedures and methods do; the credentials develop over an extended period of time and become deeply tradition-bound.
Perhaps a better expression would be that they carry their own credentials. This provides a response to the problem posed in 4. Drawing inspiration from another philosopher of experiment, Mayo, Parker b suggests a remedy to some of the shortcomings in current approaches to simulation model evaluation.
That is, what warrants our concluding that the simulation would be unlikely to give the results that it in fact gave, if the hypothesis of interest were false? Parker believes that too much of what passes for simulation model evaluation lacks rigor and structure because it:
Practitioners of simulation, particularly in engineering contexts, in weapons testing, and in climate science, tend to conceptualize the EOCS in terms of verification and validation. Verification is said to be the process of determining whether the output of the simulation approximates the true solutions to the differential equations of the original model. Validation, on the other hand, is said to be the process of determining whether the chosen model is a good enough representation of the real-world system for the purpose of the simulation.
The literature on verification and validation from engineers and scientists is enormous and it is beginning to receive some attention from philosophers. Verification can be divided into solution verification and code verification. The former verifies that the output of the intended algorithm approximates the true solutions to the differential equations of the original model. The latter verifies that the code, as written, carries out the intended algorithm.
Code verification has been mostly ignored by philosophers of science, probably because it has been seen as more of a problem in computer science than in empirical science—perhaps a mistake. One straightforward strategy for solution verification is to compare computed output with known analytic solutions in the regions of solution space where these are available. Though this method can of course help to make a case for the results of a computer simulation, it is by itself inadequate, since simulations are often used precisely because analytic solutions are unavailable for the regions of solution space that are of interest.
Other indirect techniques are available: the most important of which is probably checking to see whether and at what rate computed output converges to a stable solution as the time and spatial resolution of the discretization grid gets finer. The principal strategy of validation involves comparing model output with observable data. Again, of course, this strategy is limited in most cases, where simulations are being run because observable data are sparse.
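A toy sketch of both verification strategies just described follows, assuming a simple test equation with a known analytic solution; the solver, the step sizes, and the expected convergence order are all illustrative, not drawn from any real verification study.

```python
# Sketch of two verification checks: compare computed output against an
# analytic solution where one is available, and check the rate at which the
# error shrinks as the discretization is refined. The toy equation
# du/dt = -u is chosen purely for illustration.
import math

def simulate(dt, t_end=1.0):
    """Forward-Euler solution of du/dt = -u with u(0) = 1."""
    u, t = 1.0, 0.0
    while t < t_end - 1e-12:
        u += dt * (-u)
        t += dt
    return u

exact = math.exp(-1.0)          # known analytic solution at t = 1
errors = []
for dt in (0.1, 0.05, 0.025, 0.0125):
    err = abs(simulate(dt) - exact)
    errors.append(err)
    print(f"dt={dt:<7} error={err:.6f}")

# Observed order of convergence; forward Euler should give roughly 1.
for coarse, fine in zip(errors, errors[1:]):
    print("observed order ~", math.log(coarse / fine) / math.log(2.0))
```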