New Super Computers

In general, supercomputers do "above normal computing," in the sense that superstars stand "above" other stars. And yes, you can build your own super personal computer; that will be explained in the next chapter, on "grid" computing. Note that "super-anything" does not mean the fastest or absolute best.

Although contemporary personal computers perform at tens or hundreds of megaflops (millions of calculations per second), they still cannot solve certain problems quickly enough; personal computing only moved into the gigaflops region in the early 2000s. A computer that calculates at several gigaflops can solve such a problem in acceptable time, whereas doing the same calculation at "only" 100 megaflops in acceptable time is close to impossible. In other words, with a supercomputer you can do calculations within a time limit or session that is acceptable to the user. Namely: YOU. To put it more strongly: with a supercomputer you can do things in real time (meaning: now, immediately) that could not be done in your lifetime with one PC. So certain tasks are, in some cases, simply not possible in real time on a PC. (2) For example, a single PC would need days or even weeks to calculate a weather map; by the time the map was complete, it would "predict" weather that is already several days old. That doesn't sound like a prediction, right? A supercomputer does the same job in a few minutes, and a small numeric sketch of this arithmetic appears a little further on. That is much more like what we want as users: top speed.

Cost and time issues

The construction of a supercomputer is an extraordinary and very expensive undertaking. Getting a machine from the laboratory to the market may take several years, and the cost of developing the latest supercomputers runs between 150 and 500 million dollars or more. You can imagine that such a project draws on all the resources a company has, which is one of the main reasons that the development of supercomputers is kept quiet. The latest supers can only be built with the help of the government and one or more large companies.

Using supercomputers is also expensive. As a user, you are charged according to the time you use the system, expressed in the number of processor (CPU) seconds your program runs. In the past, time on a Cray (one of the first supercomputers) cost $1,000 per hour, and "Cray time" is still a common way to express computing costs in time and dollars.

Why do we need supercomputers?

Well, as an ordinary person in the street, you don't. Your cellphone or PDA already has more computing power than the first mainframes, such as ENIAC or the Mark I. But with a glut of information flooding your senses, and swollen software trying to channel it, you may well need extreme computing power within a few decades. The architecture for creating that power is already on the horizon: wireless LANs, infobot technology, grid computing, and virtual computing centers will all become part of our everyday tools. Computers will even be sewn into our clothes (see MIT's wearable computing project).

Those who really do need supercomputers today are scientists doing massive computations at very high speeds. They use such machines in every imaginable discipline: space exploration and related imaging (picturing galaxies and intergalactic matter), environmental simulation (the effects of global warming), mathematics, physics (searching for the smallest particles of matter), gene technology (which genes make us age), and many others.
Other real-world examples are industrial and technological applications, and financial and economic systems, which cover a world where speed is all-important. More and more, supercomputers are also used to run simulations: designing airplanes, creating new chemicals and new materials, and testing car crashes without having to wreck a single car. In short, supercomputers are used for applications that would need more than a few days to produce results, or whose results slower computers cannot compute at all.
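
To make the megaflops-versus-gigaflops arithmetic, the "Cray time" billing, and this more-than-a-few-days rule of thumb concrete, here is a minimal back-of-envelope sketch in Python. The job size, machine speeds, three-day threshold, and function names are illustrative assumptions, not measurements of any real system; only the $1,000-per-hour figure echoes the historical Cray rate mentioned earlier.

```python
# Back-of-envelope only: job size and machine speeds are made-up
# assumptions chosen to mirror the text (a 100 MFLOPS PC versus a
# teraflop-class super, billed at the classic $1,000 per CPU-hour).

FLOP_NEEDED = 2e15            # hypothetical job: 2 quadrillion floating-point ops
RATE_PER_CPU_HOUR = 1000.0    # dollars, echoing the "Cray time" rate

MACHINES = {
    "PC at 100 megaflops": 100e6,   # speeds in operations per second
    "super at 1 teraflop": 1e12,
}

def human(seconds: float) -> str:
    """Render a duration in the largest convenient unit."""
    if seconds >= 86400:
        return f"{seconds / 86400:.1f} days"
    if seconds >= 3600:
        return f"{seconds / 3600:.1f} hours"
    return f"{seconds / 60:.1f} minutes"

def needs_a_super(total_ops: float, flops: float, max_days: float = 3.0) -> bool:
    """Toy rule of thumb: more than a few days on one machine means super territory."""
    return total_ops / flops / 86400 > max_days

for name, flops in MACHINES.items():
    seconds = FLOP_NEEDED / flops
    cost = seconds / 3600 * RATE_PER_CPU_HOUR
    print(f"{name}: {human(seconds)}, about ${cost:,.0f} in CPU time, "
          f"needs a super: {needs_a_super(FLOP_NEEDED, flops)}")
```

On these made-up numbers, the PC grinds for the better part of a year while the super is done in half an hour, which is exactly the weather-map situation sketched earlier.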



Building a supercomputer has to be planned very carefully because, once the machine is running, there is no room for major revisions. If one becomes necessary anyway, the company loses millions of dollars, and that can mean cancelling the project and trying to make money from the technology developed along the way, or it can leave the company bankrupt or nearly so. An example is Cray, an independent company again since 2000, which has had some difficult years. Management errors are another factor that can sink a supercomputer project; one example is Japan's Fifth Generation project. True, a lot of spin-offs came out of it, but imagine the possibilities if Japan had succeeded. Third, of course, there are periods of economic downturn: project stopped. A good example is Intel, which canceled its supercomputer project in 2002 and took its loss. All of this does not tell us much about how supercomputers are built, but it illustrates that science is not the only thing that determines what gets built or what succeeds.

Surprisingly enough, supers are often built from existing CPUs, stretched to the limits of what existing hardware allows. Terms such as superscalar, vector-oriented computing, and parallel computing are just a few of those used in this arena (a toy illustration of the last two follows at the end of this section). Since 1995, supers have also been built as grids: arrays or groups of CPUs (even ordinary PCs) connected by special software, for example adapted versions of Linux, so that they act like one big machine. The cost of this type of super is dramatically lower than the millions needed to build "conventional" supercomputers. As if we could say "conventional" without raising an eyebrow or two: the fact is simply that supers are the fastest machines of their time. Yes, we smile at ENIAC now, but back in the mid-1940s it was the fastest, almost magical, machine around.

The input/output speeds between data storage media and memory are also a problem, but no more so than on other types of computers, and because supercomputers all have unusually large RAM capacities, this problem can largely be solved by the liberal application of large amounts of money. (1) Individual processor speeds keep increasing, but at great cost in research and development, and the reality is that we are beginning to reach the limits of silicon-based processors. Seymour Cray showed that gallium arsenide (GaAs) technology could be made to work, but it is very difficult to handle, and very few companies today can build usable GaAs-based processors. This was such a problem in those years that Cray Computer was forced to acquire its own GaAs foundry so it could do the work itself.
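
As promised above, here is a toy Python sketch of two of those terms: vector-oriented computing as one operation applied to a whole array at once (with NumPy standing in for a vector unit), and parallel, grid-style computing as the same work split across several ordinary CPU cores standing in for a cluster of PCs. The workload, chunk count, and function names are arbitrary assumptions; real vector and grid machines are of course far more involved.

```python
# Toy stand-ins only: NumPy plays the vector unit, and a pool of local
# processes plays a "grid" of ordinary PCs acting like one big machine.

import numpy as np
from multiprocessing import Pool

def scalar_sum_of_squares(xs) -> float:
    """Plain scalar loop: one element at a time, like a single ordinary CPU."""
    total = 0.0
    for x in xs:
        total += x * x
    return total

def chunk_sum_of_squares(chunk) -> float:
    """Vector style: square and sum the whole chunk in one array operation."""
    return float(np.sum(chunk * chunk))

if __name__ == "__main__":
    data = np.arange(100_000, dtype=np.float64)

    # Grid style: split the array into 4 pieces and let 4 workers
    # (stand-ins for 4 PCs in a cluster) each reduce their own piece.
    with Pool(processes=4) as pool:
        partials = pool.map(chunk_sum_of_squares, np.array_split(data, 4))

    # Both routes give the same answer, up to floating-point rounding.
    print(np.isclose(scalar_sum_of_squares(data), sum(partials)))
```

The design point mirrors the text: none of the four workers is special, yet together they finish the job roughly four times faster than the scalar loop, which is the whole appeal of building a super out of cheap, ordinary parts.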

