Technology News All about technology

5 Feb 2014

Scientific Minds Flips the Science Classroom


Orange, TX (PRWEB) September 26, 2013

Scientific Minds has recently released two products, Biology Starters and Chemistry Starters, in a new platform designed to aid educators in flipping the classroom.

In a flipped classroom, the teacher doesn't deliver lecture material during class. Instead, students use online video lessons or lectures to learn fundamental concepts on their own time. When students come to class, they are prepared to ask questions, apply their knowledge in problem solving, and participate in project-based learning.

The idea of flipping the classroom, or flip teaching, has been around since the mid-1990s, but the reversed teaching method has received more attention in the last few years with increasing education research and implementation at the high school and college levels. Company Founder Kathy Reeves states, "High school teachers who use the flipped classroom typically spend a lot of time preparing quality, online lessons. In creating Biology Starters and Chemistry Starters, we've done this work for them."

Both Biology Starters and Chemistry Starters are digital video lessons that break down difficult science concepts into "chunks" of knowledge that students more readily learn. The new platform includes interactive flashcards, over 1,000 in the Biology product, as well as quizzes that provide feedback. In each Starter, students view a video lesson, interactively review flashcards, and take a quiz as often as needed until they feel ready to apply the knowledge.

Some colleges, like the University of Wisconsin-Madison, have been using a flipped approach for years. Others, like the Engineering and Education Research Center at the University of Pittsburgh's Swanson School of Engineering, are just beginning to give it a try. Professors at the Swanson School of Engineering are chunking their lectures into 10-minute segments that students watch outside of class.

Scientific Minds has similarly built science education products by chunking state and national science standards into 5-10 minute video lessons that clarify difficult concepts by combining informative text with engaging images, graphics, and animations. The 110 Biology Starters and 101 Chemistry Starters address 100% of biology and chemistry standards in most states and are aligned to the NGSS.

Both products are sold as a one-year site license and are updated each year to meet changing state and national standards. The company offers a free 7-day trial at http://www.ScientificMinds.com.

About Scientific Minds, LLC

Scientific Minds, LLC publishes award-winning online resources for K-12 science education. Founded in 2007 by a veteran science teacher, Scientific Minds, LLC provides tools and processes to enhance science instruction and includes strategies to support all students. The company mission is to develop quality, web-based educational products that inspire, encourage, and promote next-generation skills for student success.







7 Oct 2012

The God Problem, New Book by Noted Scientific Thinker Howard Bloom, Does an Intelligent End-Run around Intelligent Design


Brooklyn, NY (PRWEB) September 10, 2012

How does an inanimate universe manage the God-like feat of creating itself from absolutely nothing? And what enables it, after pulling itself up by its own cosmic bootstraps, to continue to generate billions of years worth of stunningly creative new forms all by itself?

Humanity, according to novelist Martin Amis, is at least five Einsteins away from discovering the algorithms necessary for explaining the existence of a developing universe with no god at the rudder. But Amis may be overly pessimistic. Why? Because of Howard Bloom, who in his new book The God Problem (Prometheus Books, hardcover, $28.00) tackles the question of how a godless cosmos creates, offering what some say is the next paradigm.

The God Problem: How a Godless Cosmos Creates has been compared to Newton's Principia and Darwin's Origin of Species. It has been praised by one Nobel Prize winner and two MacArthur Genius Award winners. "If Howard Bloom is only 10% right," says author and science junkie Barbara Ehrenreich, "we'll have to drastically revise our notions of the universe."

Why?

The God Problem sets itself no small job. Bloom, who has been called "the next Stephen Hawking" and "the Einstein, Newton, Darwin, and Freud of the 21st Century," poses the same seemingly unanswerable question that adherents of Intelligent Design use to continually poke godless scientists: How can a universe without a creator not only come into existence, but continue to create marvelous new complexities for billions and billions of years? The intellectual and scientific journey Bloom treats us to is simply delightful. Nobel Prize winner Dudley Herschbach calls The God Problem "truly awesome." And Heinz Insu Fenkl, who heads The State University of New York's Interstitial Studies Institute, says The God Problem is "the next paradigm. It will take you to a place from which you will never re-emerge, a brand new universe in the same skin as the one you now unknowingly inhabit."

Loop quantum gravity cosmologist Martin Bojowald goes one step further. He says The God Problem is "entertaining, suspenseful, rigorous, and thoroughly mathematical." Yes, rigorous and thoroughly mathematical. In other words, The God Problem may read like a detective novel, but its new ideas are important scientific contributions.

As with many great philosophers and scientists, including Richard Feynman and Albert Einstein, Bloom's uncanny perspective is rooted in a willingness to stand the cherished tenets of science squarely on their head. Bloom starts by upending one of the most cherished concepts of all: entropy, the Second Law of Thermodynamics. "The second rule of science, going back to Galileo and Anton van Leeuwenhoek," says Bloom, "is to look at what's right under your nose as if you've never seen it before. To find your hidden assumptions and to flip them." Assumption flipping has been an enormously productive strategy for science. By simply overturning one assumption, one axiom (the absolute character of time, for example), Einstein was able to generate a whole new vision of the universe, one with extraordinary predictive powers. But the real trick is not just changing our assumptions. It's finding them.

What if we reverse the second law of thermodynamics? What kind of universe will that give us? In fact, it will give us a universe very much like the one we inhabit. But Bloom does more than just reverse one assumption. He reverses five. He does it with what he calls The Five Heresies:

1. a does not equal a

2. one plus one does not equal two

3. entropy is wrong

4. randomness is not as random as you think and

5. information theory is way off base.

In his book Evolutionaries, Carter Phipps says that Bloom's ability to raise science's next big questions, then to make the result delicious, is like that of Carl Sagan. On his quest for the simple rules that kicked off the cosmos, the magic beans that account for the stunning progression from nothingness to everything we see around us (including us), Bloom gives tantalizing glimpses of the cutting-edge science being produced by other axiom flippers. For example, Bloom shows us how Stephen Wolfram is proving that humanity's 6,000-year-old mathematical enterprise is only one of an infinite number of possible math systems, which makes the traditional mathematical structure an unlikely tool for accurately modeling the universe we live in. And Bloom shows why Wolfram recommends jettisoning traditional mathematics entirely and relying on computers and cellular automata to spit out entire new universes based on different starting assumptions.

Wolfram, like Bloom, shows that the next big truths may come from science's heretics. And Bloom, to paraphrase evolutionary biologist David Sloan Wilson, is a heretic beyond all heretics.

In the process of sussing out the rules of a universe able to do everything it does without a bathrobed, bearded god at the controls, Bloom takes the reader on a tour of a hidden history. He reveals the invisible underside of almost ten thousand years of philosophy, mathematics, and science. He reveals the roots of the tools with which you and I think every day. And in the process he holds up those tools to the light and gives you and me a crack at upgrading them. In the words of Robin Fox, former Director of Research for the Guggenheim Foundation, founder of Rutgers University's anthropology department, and author of The Tribal Imagination, Bloom "takes us on a magic carpet ride of ideas about: well, about everything. And it turns out that everything we knew about everything is probably wrong." The result, says Fox, is "an intellectual cave of wonders made more wonderful by the tales of the lives of the people behind the ideas."

The God Problem is also essential reading if you're curious about the beginning, middle, and end of the cosmos. The book tells the tale of how Bloom, as a sixteen-year-old, developed a theory that the universe may actually be what topologists call a torus, that is, a doughnut, a bagel. Bloom tossed his 1959 theory away as mere comic book science. But in 1980, something that Big Bagel theory had predicted was made a standard part of physics by Alan Guth: inflation. And in 1998, another prediction of Big Bagel theory proved to be true: that at a certain point the universe would begin to accelerate away from itself. Which made the Big Bagel one of the few theories able to explain two of physics' and cosmology's greatest current mysteries: the propulsive force known as dark energy, and why there's so much matter in this universe and so little anti-matter. In The God Problem, Bloom offers a nifty demonstration of what Big Bagel theory means for the future, and the end, of the cosmos.

"The universe, and all its glory, may indeed be explainable without a God," says Bloom. Thank God Howard Bloom is here to explain how.

For more information, visit http://howardbloom.net.

The God Problem: How a Godless Cosmos Creates

By Howard Bloom

Prometheus Books, ISBN: 978-1-61614-551-4; hardcover, 708 pp., $28.00

About Howard Bloom

Howard Bloom is a former visiting scholar at New York University, the founder of the International Paleopsychology Project, and the founder of the Space Development Steering Committee (an organization that includes Buzz Aldrin; Edgar Mitchell, the sixth astronaut on the moon; and members from NASA, the National Science Foundation, and the National Space Society). Bloom is a founding board member of the Epic of Evolution Society and a member of the New York Academy of Sciences, the National Association for the Advancement of Science, the American Psychological Society, the Human Behavior and Evolution Society, the European Sociobiological Society, and the Academy of Political Science.

27 Jul 2012

Virginia Tech Nutrition Professor & Published Author Brenda Davy Named to WellBalance Weight Loss Camps Scientific Advisory Board


Asheville, NC (PRWEB) May 31, 2012

Dr. Brenda Davy, Associate Professor of Human Nutrition, Foods, & Exercise at Virginia Tech, has been named to the Scientific Advisory Board for WellBalance, a leading health organization that runs weight loss summer camps for adolescents ages 10 to 20.

Davy has agreed to provide recommendations to WellBalance for improving clients' overall health through better teen diet, physical activity, and weight management strategies. Davy's expertise in improving weight loss diets and health behaviors will benefit WellBalance customers on their journey toward meeting their summer weight loss camp and health goals.

"Dr. Davy has an enormous amount of respect in the scientific world due to the research she has led," said John Taylor, Vice President of Programs for WellBalance and a celebrity fitness expert. "Dr. Davy is one of the nation's leading experts on helping individuals create healthy behaviors, something that the entire nation is attempting to implement as a way to fight childhood obesity. WellBalance is honored to have her as a member of our Scientific Advisory Board, and we feel that her opinions will help our clients progress in their journey towards healthy living."

Davy has a number of publications to her credit, including "Translational Research: Bridging the gap between long-term weight loss maintenance research and practice," which appeared in the Journal of the American Dietetic Association. Davy is also a Registered Dietitian and studies the relationship between beverage consumption and weight management.

"I am very happy to join WellBalance as a member of their Scientific Advisory Board," said Davy. "I look forward to helping WellBalance clients adopt weight management strategies and diet behaviors that will enable them to lead healthier lives."

Dr. Davy earned her Ph.D. in Human Nutrition from Colorado State University in 2001. She also earned an M.S. in Exercise Physiology and a B.S. in Human Nutrition from Virginia Tech. In 2010, she was featured in the WebMD article "Study Shows Drinking Water Helps People Lose Weight and Keep the Pounds Off," a report based on her research. The study, published in the journal Obesity in 2010, received national and international media attention.

###

WellBalance fitness and weight loss health camps designed the ME Plan to Motivate & Educate on what medical research shows works for sustainable fitness, weight loss, and health success. Founded by professionals and guided by experts who have led some of the largest behavioral health, mental health, and treatment programs in the country, WellBalance is working to become the leader with a focus on improving an individual's overall health. WellBalance developed the WellBalance Health Score

29 Jun 2011

Leading Experts to Present at High-Performance Computing Online Event on September 14, Produced by Scientific Computing and Sponsored by HP, Microsoft and Visual Numerics
New York, NY (PRWEB) September 7, 2006

The High-Performance Computing (HPC) Online Expo, the second annual educational event for technical computing professionals seeking knowledge about data intensive computing and information technology, will be held on September 14, 2006, from 9:30 a.m. – 4:30 p.m. EDT.

To register for this free online event, click here:

http://hpc.unisfair.com/index.jsp?code=RBI_HPC_PRESS

CONFERENCE-AT-A-GLANCE:

10:00 a.m. EDT: The Confluence of Traditional Scientific Disciplines with Heterogeneous Computing, Eric Jakobsson, Ph.D., Professor, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign.

More details: http://www.reedbusinessinteractive.com/hpc/10am.asp

11:00 a.m. EDT: Data-Driven Science and Cyberinfrastructure,

case study panel discussion with professors from Cornell University.

More details: http://www.reedbusinessinteractive.com/hpc/11am.asp

12:00 p.m. EDT: Back to the Future - DC Powering Your Data Center, Bill Tschudi, Project Leader, Lawrence Berkeley National Laboratory.

More details: http://www.reedbusinessinteractive.com/hpc/12pm.asp

1:00 p.m. EDT: Advancing Academic Research: High Performance Computing Applications, case study panel discussion with professors from Louisiana State University.

2:00 p.m. EDT: High Performance Grid Computing: Case Studies & Applications, Wolfgang Gentzsch, Ph.D., Managing Director, Grid Computing and Networking Services, Renaissance Computing Institute.

Each session of HPC Expo 2006 will be followed by a live, interactive Question and Answer session. In addition, event sponsors HP, Microsoft and Visual Numerics will host interactive exhibit booths, enabling attendees to download information or chat with company representatives.

For registration and additional information, please click here: http://hpc.unisfair.com/index.jsp?code=RBI_HPC_PRESS

About Unisfair

Unisfair provides world-leading solutions for scalable online events such as web conferences, virtual tradeshows, and online expos. Unisfair's solutions enable companies to maximize their reach to target audiences, generate qualified business leads, distribute knowledge, and increase brand awareness.

Unisfair serves as a one-stop solution provider for its customers, offering a comprehensive package of event planning, management, marketing, audience generation, creative and production services. Since the year 2000, Unisfair has launched thousands of online events for leading companies, including Advanstar Communications, Avaya, Business Week, IBM, Nortel, Reed Business Information, Source Media, The Economist.com and the US Department of State.

For more information about Unisfair, please visit http://www.unisfair.com.

Trademarks and registered trademarks contained herein remain the property of their respective owners.

Company contact:

Uriah Av-Ron

+972-50-7-427-087

# # #







Vocus ©Copyright 1997-, Vocus PRW Holdings, LLC. Vocus, PRWeb, and Publicity Wire are trademarks or registered trademarks of Vocus, Inc. or Vocus PRW Holdings, LLC.








9 Apr 2011

New Sony & Panasonic DVD Recorders from Scientific Vision Systems Allow Medical Imaging Systems to Go Digital and Paperless
Carlsbad, CA (PRWEB) September 11, 2005

Scientific Vision Systems today announced the immediate availability of two new medical-grade DVD recorders, one from Sony and the other from Panasonic. The Sony DVO-1000MD is the only medical-grade DVD recorder that features a built-in hard drive, while the Panasonic LQ-MD800 records directly to DVD.

The Sony unit uses DVD+RW media, which allows the user to eject the disc in less than two minutes after recording completes, a significant time savings over the typical 5-to-15-minute wait, so patient workflow is uninterrupted. The unit also features a data recovery capability called DVORECOVERY, backed by a built-in 80 GB hard drive. The DVO-1000MD records simultaneously to both the hard drive and the DVD+RW disc; in the event of a power failure or other disruption, the technician can recapture the video from the hard drive with a maximum video loss of four seconds.

The Panasonic unit uses DVD-RAM or DVD-R discs. Panasonic's DVD-RAM discs are housed in a protective cartridge, well suited to storing patient data in potentially unfavorable medical environments. Both units record in either NTSC or PAL format.

According to Ben Stluka, Medical Imaging Sales Manager for Scientific Vision Systems, “These DVD recorders are intended to replace outdated VCR technology, which is rapidly disappearing. Either of these units will serve as a drop-in replacement for legacy or newer ultrasound, endoscopy, or other medical imaging systems with a video output. These recorders allow medical facilities to further the transition to fully digital, paperless imaging operations.”

Stluka further notes that an article that appeared in the June 20, 2005 issue of Time Magazine points to an e-health, all-digital paperless medical practice. To quote the article, “The U.S. government is leading this charge into the medical information age--robustly and, by most accounts, effectively--because it pays 46% of the nation's medical bills. Dr. Mark McClellan, former head of the FDA and now director of the Centers for Medicare and Medicaid Services, is making paperless medicine mandatory for physicians who want to participate in the agency's potentially remunerative pay-for-performance scheme.” *

Medical facilities and imaging equipment manufacturers can contact Scientific Vision Systems for more information at (760) 929-8133 or through the company's website: http://www.svsimaging.com.

About Scientific Vision Systems: Located in Carlsbad, CA, Scientific Vision Systems is one of the largest distributors of medical video equipment in the United States. The company distributes for such leading manufacturers as Sony, Mitsubishi, Panasonic, Hitachi, and JVC.

*Used with permission from Time Magazine.

###














28 Mar 2011

Appro and San Diego Supercomputer Center Launch Trestles, the Nation’s Largest Open-Access Scientific Discovery Infrastructure
Milpitas, CA (Vocus/PRWEB) March 03, 2011

Appro (http://www.appro.com), a leading provider of supercomputing solutions, announces the deployment of an innovative high-performance computing (HPC) system named “Trestles” by the San Diego Supercomputer Center (SDSC) at UC San Diego. The system is based on quad-socket, 8-core AMD Opteron compute nodes connected via a QDR InfiniBand fabric configured by SDSC and Appro. The project was the result of a $2.8 million award from the National Science Foundation (NSF).

Trestles is now available to users of the TeraGrid, the nation’s largest open-access scientific discovery infrastructure. The system is among the five largest in the TeraGrid repertoire, with 10,368 processor cores, a peak speed of 100 teraflop/s, 20 terabytes of memory, and 38 terabytes of flash memory. One teraflop (TF) equals a trillion calculations per second, while one terabyte (TB) equals one trillion bytes of information. Debuting at #111 on the Top500 list of supercomputers in the latest ranking, Trestles will work with and span the deployments of SDSC’s recently introduced Dash system and a larger data-intensive system, the Appro Xtreme-X™ Supercomputer, named “Gordon” by SDSC, to become operational in late 2011.
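As a quick sanity check on the figures quoted above (this is plain arithmetic in Python, not part of the announcement), the peak speed and memory can be converted into per-core numbers:

```python
# Trestles figures as quoted in the release
cores = 10_368            # processor cores
peak_flops = 100e12       # 100 teraflop/s peak (1 TF = 1e12 calculations/s)
memory_bytes = 20e12      # 20 TB of memory (1 TB = 1e12 bytes)

# Implied per-core peak throughput and memory share
flops_per_core = peak_flops / cores     # roughly 9.6 gigaflop/s per core
memory_per_core = memory_bytes / cores  # roughly 1.9 GB per core

print(f"{flops_per_core / 1e9:.1f} GF/s and "
      f"{memory_per_core / 1e9:.1f} GB per core")
```

Both derived values are consistent with a commodity-processor cluster of that era, which fits the release's description of combining commodity parts into a high-performance architecture.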

All three SDSC systems employ flash-based memory, which is common in much smaller devices such as mobile phones and laptop computers but unique for supercomputers, which generally use slower spinning disk technology.

“Trestles is appropriately named because it will serve as a bridge between SDSC’s unique, data-intensive resources available to a wide community of users both now and into the future,” said Michael Norman, SDSC’s director.

“UCSD and SDSC are pioneering the use of flash in high-performance computing,” said Allan Snavely, associate director of SDSC and a co-PI for the new system. “Flash disks read data as much as 100 times faster than spinning disk, write data faster, and are more energy-efficient and reliable.”

“Trestles, as well as Dash and Gordon, was designed with one goal in mind: to enable as much productive science as possible as we enter a data-intensive era of computing,” said Richard Moore, SDSC’s deputy director and co-PI. “Today’s researchers are faced with sifting through tremendous amounts of digitally based data, and such data-intensive resources will give them the tools they need to do so.” Moore added that Trestles offers modest-scale and gateway users rapid job turnaround to increase researcher productivity, while also being able to host long-running jobs. Speaking of speed, SDSC and Appro brought Trestles into production in less than 10 weeks from initial hardware delivery. “We committed to getting the system in the hands of our users and meeting NSF’s production deadline,” noted Moore.

Early User Successes

Early users of SDSC’s Trestles include Bridget Carragher and Clint Potter, directors at the National Resource for Automated Molecular Microscopy at The Scripps Research Institute in La Jolla, Calif. Their project focuses on establishing a portal on the TeraGrid for structural biology researchers to facilitate electron microscopy (EM) image processing using the Appion pipeline, an integrated database-driven system.

"We are very excited about this early opportunity to use the Trestles infrastructure for high performance structural biology projects,” said Carragher. “Based on our initial experience, we are optimistic that this system will have a dramatic impact on the scale of projects we can undertake, and on the resolution that can be achieved for macromolecular structure.”

TeraGrid User-Friendly

To ensure that productivity on Trestles remains high, SDSC will adjust allocation policies, queuing structures, user documentation, and training based on a quarterly review of usage metrics and user satisfaction data. Trestles, along with SDSC’s Dash and Triton Resource clusters, uses a matrixed pool of expertise in system administration and user support, as well as the SDSC-developed Rocks cluster management software. SDSC’s Advanced User Support has already established key benchmarks to accelerate user applications, and will subsequently assist users in tuning and optimizing applications for Trestles. Full details of the new system can be found on the Trestles webpage.

Trestles’ policies are designed to meet the needs of a growing user base. NSF’s award to build and deploy Trestles was announced last August by SDSC, and Trestles will be available to TeraGrid users through 2013. In November 2009, SDSC announced a five-year, $20 million grant from the NSF to build and operate Gordon, the first high-performance supercomputer to employ a vast amount of flash memory. Dash, a smaller prototype of Gordon, was deployed in April 2010. All of these systems are being integrated by Appro and use a similar design philosophy of combining commodity parts in innovative ways to achieve high-performance architectures.

About Appro

Appro is a leading developer of supercomputing solutions, uniquely positioned to support high-performance computing markets with a focus on medium- to large-scale deployments. Appro accelerates technical, data-intensive applications for faster business results through outstanding price/performance and a balanced architecture coupled with the latest technologies, open standards, and engineering expertise. Appro is headquartered in Milpitas, CA, with offices in Korea and Houston, TX. To learn more, go to http://www.appro.com.

About SDSC

As an Organized Research Unit of UC San Diego, SDSC is a national leader in creating and providing cyberinfrastructure for data-intensive research, and celebrated its 25th anniversary in late 2010 as one of the National Science Foundation’s first supercomputer centers. Cyberinfrastructure refers to an accessible and integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC is a founding member of TeraGrid, the nation’s largest open-access scientific discovery infrastructure.

###
















19 Mar 2011

AMD Stream Processor First to Break 1 Teraflop Barrier: Next-Generation AMD FireStream™ 9250 Processor Accelerates Scientific and Engineering Calculations, Efficiently Delivering Supercomputer Performance at up to Eight Gigaflops-Per-Watt
DRESDEN, Germany (PRWEB) June 15, 2008

Customers can leverage AMD's latest FireStream offering to run critical workloads such as financial analysis or seismic processing dramatically faster than with a CPU alone, helping them address more complex problems and achieve faster results. For example, developers are reporting up to a 55x performance increase on financial analysis codes compared to processing on the CPU alone, which supports their efforts to make better and faster decisions.¹ Additionally, the use of flexible GPU technology rather than custom accelerators helps those creating application-specific systems to enhance and maintain their solutions easily.

The AMD FireStream 9250 stream processor includes a second-generation double-precision floating point hardware implementation delivering more than 200 gigaflops, building on the capabilities of the earlier AMD FireStream™ 9170, the industry's first GP-GPU with double-precision floating point support. The AMD FireStream 9250's compact size makes it ideal for small 1U servers as well as most desktop systems, workstations, and larger servers, and it features 1 GB of GDDR3 memory, enabling developers to handle large, complex problems.
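The headline figures imply two simple derived numbers worth noting: the board power suggested by the efficiency claim, and the single- to double-precision throughput ratio. This is back-of-the-envelope Python arithmetic on the release's own figures, not data from AMD:

```python
# Figures quoted for the AMD FireStream 9250
single_precision_flops = 1e12   # "first to break the 1 teraflop barrier"
double_precision_flops = 200e9  # "more than 200 gigaflops" double precision
flops_per_watt = 8e9            # "up to eight gigaflops-per-watt"

# Power draw implied by running at peak single precision
# at the quoted best-case efficiency
implied_watts = single_precision_flops / flops_per_watt  # 125 W

# Ratio of single- to double-precision peak throughput
sp_dp_ratio = single_precision_flops / double_precision_flops  # 5:1

print(f"~{implied_watts:.0f} W implied at peak, "
      f"{sp_dp_ratio:.0f}:1 SP:DP ratio")
```

Since "up to" eight gigaflops-per-watt is a best case, 125 W is a lower bound on the implied board power, not a measured specification.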

Driving broad consumer adoption with open systems

AMD enables development of the FireStream family of processors with its AMD Stream SDK, designed to help developers create accelerated applications for AMD FireStream, ATI FireGL™ and ATI Radeon™ GPUs. AMD takes an open-systems approach to its stream computing development environment to ensure that developers can access and build on the tools at any level. AMD offers published interfaces for its high-level language API, intermediate language, and instruction set architecture; and the AMD Stream SDK's Brook+ front-end is available as open source code.

In keeping with its open systems philosophy, AMD has also joined the Khronos Compute Working Group. This working group's goals include developing industry standards for data parallel programming and working with proposed specifications like OpenCL. The OpenCL specification can help provide developers with an easy path to development across multiple platforms.

"An open industry standard programming specification will help drive broad-based support for stream computing technology in mainstream applications," said Rick Bergman, senior vice president and general manager, Graphics Product Group, AMD. "We believe that OpenCL is a step in the right direction and we fully support this effort. AMD intends to ensure that the AMD Stream SDK rapidly evolves to comply with open industry standards as they emerge."

Accelerating industry adoption

The growth of the stream computing market has accelerated over the past few years with Fortune 1000 companies, leading software developers and academic institutions utilizing stream technology to achieve tremendous performance gains across a variety of applications.

"Stream computing is increasingly important for mainstream and consumer applications and is no longer limited to just the academic or engineering industries. Today we are truly seeing a fundamental shift in emerging system architectures," said Jon Peddie, president, Jon Peddie Research. "As the industry's only provider of both high-performance discrete GPUs and x86-compatible CPUs, AMD is uniquely well-suited to developing these architectures."

AMD customers, including ACCIT, Centre de Physique de Particules de Marseille, Neurala and Telanetix, are using the AMD Stream SDK and current AMD FireStream, ATI FireGL or ATI Radeon boards to achieve dramatic performance gains on critical algorithms in HPC, workstation and consumer applications. Currently, Neurala reports that it is achieving 10-200x speedups over the CPU alone on biologically inspired neural models, applicable to finance, image processing and other applications.²

AMD is also working closely with world class application and solution providers to ensure customers can achieve optimum performance results. Stream computing application and solution providers include CAPS entreprise, Mercury Computer Systems, RapidMind, RogueWave and VizExperts. Mercury Computer Systems provides high-performance computing systems and software designed for complex image, sensor, and signal processing applications. Its algorithm team reports that it has achieved 174 GFLOPS performance for large 1D complex single-precision floating point FFTs on the AMD FireStream 9250.³

Pricing and availability

AMD plans to deliver the FireStream 9250 and the supporting SDK in Q3 2008 at an MSRP of $999 USD. The AMD FireStream 9170, the industry's first double-precision floating point stream processor, is currently available for purchase and is competitively priced at $1,999 USD. For more information about the AMD FireStream 9250, the AMD FireStream 9170, or AMD's complete line of stream computing solutions, please visit http://www.amd.com/stream.

About AMD

Advanced Micro Devices (NYSE: AMD) is a leading global provider of innovative processing solutions in the computing, graphics and consumer electronics markets. AMD is dedicated to driving open innovation, choice and industry growth by delivering superior customer-centric solutions that empower consumers and businesses worldwide. For more information, visit http://www.amd.com.

¹ RapidMind has reported a 55x speedup over the CPU alone on binomial options pricing calculators. The comparison is versus QuantLib running on a single core of a Dual-Core AMD Opteron™ 2352 processor on a Tyan S2915 with Windows XP 32-bit (Palomar Workstation from Colfax).

² The Neurala comparison is against a dual AMD Opteron 248 system (using only a single processor for comparison) with 2GB of dual-channel DDR 400 ECC SDRAM and SUSE Linux 10 (custom kernel).

³ Mercury benchmark system details: Intel Core2 6820 @ 2.13 GHz with 3GB of RAM and a FireStream 9250 stream processor.

AMD, the AMD Arrow logo, Opteron, ATI, the ATI logo, FireStream, FireGL, Radeon, and combinations thereof, are trademarks of Advanced Micro Devices, Inc. Other names are for informational purposes only and may be trademarks of their respective owners.





















3Mar/110

New CULA™ GPU-Accelerated Math Library Brings Faster Solvers to Millions of Scientific Applications



NVIDIA GPU Technology Conference, San Jose, CA (PRWEB) September 30, 2009

EM Photonics, Inc. today announced the general availability of CULA™, an implementation of the most widely used functions from the industry-standard LAPACK linear algebra interface. CULA is accelerated using NVIDIA's massively parallel CUDA-enabled GPUs and allows millions of developers, engineers, and scientists to experience the computational performance of supercomputers right at their desk.

"We are impressed by CULA's early performance results and know that our customers will be happy to receive GPU support for Jacket functions, such as QR, LU, and SVD, built using this technology," said John Melonakos, CEO of AccelerEyes. "We are pleased to partner with EM Photonics in the delivery of the fastest LAPACK routines for the MATLAB community."

"Impressive stuff!" wrote one of CULA's beta testers on the CULA forums. "I did some benchmarking tests yesterday and used MKL on a dual-core processor with 6GB of RAM versus a GeForce GTX 260. Even when the CPU was running two threads, I still got up to around a 6x speedup on both QR decomposition and SVD (linear algebra functions)!" Another user wrote: "It works beautifully. I get 4 times faster execution than MATLAB's QR using four processor cores. SVD is 23 times faster for 1024x1024 matrices!"
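The beta testers quoted above benchmarked QR and SVD decompositions. A minimal sketch of that kind of CPU-side timing run, using NumPy's LAPACK-backed routines as a stand-in (CULA's GPU API is not shown here, and the matrix size is illustrative):

```python
import time
import numpy as np

def bench(fn, a, repeats=3):
    """Return the best wall-clock time of fn(a) over several runs."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(a)
        best = min(best, time.perf_counter() - start)
    return best

# Single-precision input, matching the precisions CULA Basic supports
a = np.random.rand(1024, 1024).astype(np.float32)

qr_time = bench(np.linalg.qr, a)    # QR decomposition
svd_time = bench(np.linalg.svd, a)  # singular value decomposition
print(f"QR:  {qr_time:.3f} s")
print(f"SVD: {svd_time:.3f} s")
```

Comparing CPU timings like these against a GPU run of the same factorizations is how speedup figures of the kind quoted in the release would be produced.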

"Applications ranging from video games, to medical imaging, to scientific computing have come to depend on the superior processing capabilities of GPUs. By every measure, this trend is rapidly growing and impacting more and more markets," said Eric Kelmelis, CEO of EM Photonics. "To bridge the current gap between what GPUs can offer and how they can be used to accelerate applications, we have developed CULA in close association with NVIDIA. A broad range of users took advantage of our beta release over the last few months and achieved 5-10x performance gains over CPU implementations."

"The CULA linear algebra library enables developers for a wide range of technical computing applications including computational fluid dynamics, electronic design automation, finite element analysis, and electromagnetic simulations, to take advantage of the performance boost of the GPU," said Andy Keane, General Manager for the Tesla high-performance computing group at NVIDIA. "With this release, EM Photonics is making a meaningful addition to the NVIDIA CUDA eco-system by providing a mature, complete math library."

Pricing and Availability

CULA is available in three different versions: Basic, Premium and Commercial. CULA Basic is free of charge and includes six of the most popular LAPACK routines in single and single-complex precisions. CULA Premium costs $395 and is a significantly more robust version with additional routines in single, double, single-complex, and double-complex precisions. CULA Commercial pricing is available upon request. For complete details, please visit www.culatools.com.

Live Demonstrations at the NVIDIA GPU Technology Conference this week!

Watch videos and live demos of CULA accelerating simulated tomography image reconstruction and digital watermarking of video at the EM Photonics booth, #37. If you are attending the conference, do not miss our session, "CULA: Robust GPU Accelerated Linear Algebra Libraries," on Thursday, October 1st at 2:00 p.m.

About CULAtools™

CULAtools™ is EM Photonics' product family comprising CULA™ Basic, Premium, and Commercial. CULA is our GPU-accelerated implementation of LAPACK - a collection of commonly used linear algebra functions used by millions of developers in the scientific and engineering community. Having developed accelerated linear algebra solvers for our clients since 2004, EM Photonics partnered with NASA Ames Research Center in 2007 to extend and unify these libraries into a single, GPU-accelerated package. Through a partnership with NVIDIA®, our GPU Gurus™ focused on developing a commercially available implementation of accelerated linear algebra routines. By leveraging NVIDIA's CUDA™ architecture, CULA provides users with linear algebra functions with unsurpassed performance.

About EM Photonics

Headquartered in Newark, Delaware, EM Photonics is a recognized leader in implementing computationally intense algorithms on commodity hardware platforms. Using specialized computer architectures such as GPUs and FPGAs, EM Photonics accelerates its clients' applications to achieve better, faster results. We offer consulting services and custom-designed tools to commercial, government, and academic organizations seeking to optimize their scientific computing, image processing, and numerical analysis applications.

###

© 2009 EM Photonics, Inc. All rights reserved. EM Photonics, the EM Photonics logo, and CULAtools and CULA are trademarks of EM Photonics, Inc. NVIDIA, Tesla, and CUDA are trademarks or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.





19Jan/110

DOE to Explore Scientific Cloud Computing at Argonne, Lawrence Berkeley National Laboratories



Argonne, IL, and Berkeley, CA (Vocus) October 15, 2009

Cloud computing is gaining traction in the commercial world, but can such an approach also meet the computing and data storage demands of the nation’s scientific community? A new program funded by the American Recovery and Reinvestment Act through the U.S. Department of Energy (DOE) will examine cloud computing as a cost-effective and energy-efficient computing paradigm for scientists to accelerate discoveries in a variety of disciplines, including analysis of scientific data sets in biology, climate change and physics.

Cloud computing refers to a flexible model for on-demand access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services, and software) that can be easily provisioned as needed. While shared resources are not new to high-end scientific computing, smaller computational problems are often run on departmental Linux clusters with software customized for the science application. Cloud computing centralizes the resources to gain efficiency of scale and permit scientists to scale up to solve larger science problems while still allowing the system software to be configured as needed for individual application requirements.

To test cloud computing for scientific capability, DOE centers at the Argonne Leadership Computing Facility (ALCF) in Illinois and the National Energy Research Scientific Computing Center (NERSC) in California will install similar mid-range computing hardware, but will offer different computing environments. The combined set of systems will create a cloud testbed that scientists can use for their computations while also testing the effectiveness of cloud computing for their particular research problems. Since the project is exploratory, it’s been named Magellan in honor of the Portuguese explorer who led the first effort to sail around the globe and for whom the “clouds of Magellan” – two small galaxies in the southern sky – were named.

One of the goals of the Magellan project is to explore whether cloud computing can help meet the overwhelming demand for scientific computing. Although computation is an increasingly important tool for scientific discovery, and DOE operates some of the world’s most powerful supercomputers, not all research applications require such massive computing power. The number of scientists who would benefit from mid-range computing far exceeds the supply of available resources.

“As one of the world’s leading providers of computing resources to advance science, the Department of Energy has a vested interest in exploring new options for meeting the overwhelming demand for computing time,” said Michael Strayer, associate director of DOE’s Office of Advanced Scientific Computing Research. “Both NERSC and ALCF have proven track records in deploying innovative new systems and providing essential support services to the scientists who use those systems, so we think the results of this project will be quite valuable as we chart future courses.”

DOE is funding the project at $32 million, with the money divided equally between Argonne National Laboratory and Lawrence Berkeley National Laboratory, where NERSC is located.

"Cloud computing has the potential to accelerate discoveries and enhance collaborations in everything from optimizing energy storage to analyzing data from climate research, while conserving energy and lowering operational costs," said Pete Beckman, director of Argonne’s Leadership Computing Facility and project lead. “We know that the model works well for business applications, and we are working to make it equally effective for science.”

At NERSC, the Magellan system will be used to measure a broad spectrum of the DOE science workload and analyze its suitability for a cloud model by making Magellan available to NERSC’s 3,000 science users. NERSC staff will use performance-monitoring software to analyze what kinds of science applications are being run on the system and how well they perform on a cloud.

“Our goal is to get a global picture of Magellan’s workload so we can determine how much of DOE’s mid-range computing needs could and should run in a cloud environment and what hardware and software features are needed for science clouds,” said NERSC Director Kathy Yelick. “NERSC’s users will play a key role in this evaluation as they will bring a very broad scientific workload into the equation and help us learn which features are important to the scientific community.”

Looking at a spectrum of DOE scientific applications, including protein structure analysis, power grid simulations, image processing for materials structure analysis and nanophotonics and nanoparticle analysis, the Magellan research team will deploy a large cloud test bed with thousands of Intel Nehalem CPU cores. The project will also explore commercial offerings from Amazon, Microsoft and Google.

In addition, Magellan will provide data storage resources that will be used to address the challenge of analyzing the massive amounts of data being produced by scientific instruments ranging from powerful telescopes photographing the universe to gene sequencers unraveling the genetic code of life. NERSC will make the Magellan storage available to science communities using a set of servers and software called “Science Gateways,” as well as experiment with Flash memory technology to provide fast random access storage for some of the more data-intensive problems.

The NERSC and ALCF facilities will be linked by a groundbreaking 100 gigabit-per-second network, developed by DOE’s ESnet (another DOE initiative funded by the Recovery Act). Such high bandwidth will facilitate rapid transfer of data between geographically dispersed clouds and enable scientists to use available computing resources regardless of location.

“It is clear that cloud computing will have a leading role in future scientific discovery,” added Beckman. “In the end, we will know which scientific application domains demonstrate the best performance and what software and processes are necessary for those applications to take advantage of cloud services.”

About Argonne and the ALCF

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne operates the ALCF for the DOE Office of Science as part of the larger DOE Leadership Computing Facility strategy. DOE leads the world in providing the most capable civilian supercomputers for science. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.

About NERSC and LBNL

The National Energy Research Scientific Computing Center (NERSC) is the primary high-performance computing facility for scientific research sponsored by the U.S. Department of Energy’s Office of Science. The NERSC Center currently serves thousands of scientists at national laboratories and universities across the country, researching problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the DOE Office of Science.

# # #




