Visualization and Analysis Activities May 19, 2009

Performance Measures x.x, x.x, and x.x
Visualization and Analysis Activities
May 19, 2009
Hank Childs, VisIt Architect

Outline
  • VisIt project overview
  • Visualization and analysis highlights with the Nek code
  • Why petascale computing will change the rules
  • Summary & future work
  • VisIt is a richly featured, turnkey application
    (Figure: 27-billion-element Rayleigh-Taylor instability, MIRANDA code on BG/L)
  • VisIt is an open source, end user visualization and analysis tool for simulated and experimental data
  • Used by: physicists, engineers, code developers, vis experts
  • >100K downloads on web
  • R&D 100 award in 2005
  • Used “heavily to exclusively” on 6 of world’s top 8 supercomputers
  • VisIt serves many use cases: quantitative analysis, data exploration, comparative analysis, visual debugging, presentations
  • Terribly named! It is intended for more than just visualization.
VisIt has a rich feature set that can impact many science areas.
  • Meshes: rectilinear, curvilinear, unstructured, point, AMR
  • Data: scalar, vector, tensor, material, species
  • Dimension: 1D, 2D, 3D, time varying
  • Rendering (~15 plot types): pseudocolor, volume rendering, hedgehogs, glyphs, mesh lines, etc.
  • Data manipulation (~40 operators): slicing, contouring, clipping, thresholding, restrict to box, reflect, project, revolve, …
  • File formats (~85)
  • Derived quantities: >100 interoperable building blocks (+, -, *, /, gradient, mesh quality, if-then-else, and, or, not)
  • Many general features: position lights, make movies, etc.
  • Queries (~50): ways to pull out quantitative information, for debugging and comparative analysis
VisIt employs a parallelized client-server architecture: a local client (Linux, Windows, or Mac, with graphics hardware) drives parallel visualization resources on the remote machine where the user's data resides. Observations:
  • Good for remote visualization
  • Leverages available resources
  • Scales well
  • No need to move data
Additional design considerations:
  • Plugins
  • Multiple UIs: GUI (Qt), CLI (Python), more…
The VisIt team focuses on making a robust, usable product for end users.
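As a flavor of how such interoperable building blocks compose, here is a minimal sketch in plain Python. This is an illustrative model only, not VisIt's actual expression engine; every function name below is invented for the example.

```python
# Illustrative sketch (not VisIt's actual API): derived quantities as
# interoperable building blocks that compose elementwise over a field.
def lift(op):
    """Turn a scalar operation into an operation over whole fields (lists)."""
    return lambda *fields: [op(*vals) for vals in zip(*fields)]

add = lift(lambda a, b: a + b)
mul = lift(lambda a, b: a * b)
gt = lift(lambda a, b: a > b)
if_then_else = lift(lambda c, t, f: t if c else f)

# Example: clamp a density field to a floor value, expression-style:
#   if(density > floor, density, floor)
density = [0.2, 1.5, 0.8, 3.0]
floor = [1.0, 1.0, 1.0, 1.0]
clamped = if_then_else(gt(density, floor), density, floor)
print(clamped)  # [1.0, 1.5, 1.0, 3.0]
```

Because every building block takes fields and returns a field, blocks chain freely, which is the interoperability the slide describes.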
  • Manuals
  • 300 page user manual
  • 200 page command line interface manual
  • “Getting your data into VisIt” manual
  • Wiki for users (and developers)
  • Revision control, nightly regression testing, etc
  • Executables for all major platforms
  • Day long class, complete with exercises
  • Slides from the VisIt class
VisIt is a vibrant project with many participants.
  • Over 50 person-years of effort
  • Over one million lines of code
  • Partnership between the Department of Energy's Office of Nuclear Energy, Office of Science, and National Nuclear Security Administration, among others
  • GNEP funds LLNL to support GNEP codes at Argonne
Project milestones, 2000 through Spring '09:
  • Project started (2000)
  • Developers from LLNL, LBL, & ORNL start development in the repository
  • LLNL user community transitioned to VisIt
  • R&D 100 award (2005)
  • AWE enters the repository
  • Partnership with CEA is developed
  • UC Davis & U. Utah research done in the VisIt repository
  • SciDAC Outreach Center enables a public software repository
  • Saudi Aramco funds LLNL to support VisIt
  • User community grows, including AWE & ASC Alliance schools
  • VACET is funded
  • More developers entering the repository all the time
Institutional support leverages effort from many labs.

Outline
  • VisIt project overview
  • Visualization and analysis highlights with the Nek code
  • Why petascale computing will change the rules
  • Summary & future work
  • Flow analysis for the 217-pin simulation: 1 billion grid points
  • 217-pin reactor cooling simulation, run on ¼ of the Argonne BG/P
Tracing particles through the channels (work in progress):
  • Observe which channels the particles pass through
  • Observe where particles come out
  • Place 1000 particles in one channel (a white triangle shows the current channel)
  • Two different “matrices” to describe flow from channel I to channel J
  • Exit location versus travel time in channel
  • Issues: pathlines vs streamlines, 12X vs A12
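The channel-to-channel flow matrix can be sketched as a simple tally over (seed channel, exit channel) pairs. This is a hypothetical minimal model; the channel IDs and particle counts below are invented for illustration, not taken from the Nek runs.

```python
# Minimal sketch of a channel-to-channel flow matrix: entry (i, j) counts
# particles seeded in channel i that exited through channel j.
# Channel IDs and the sample particle records are hypothetical.
from collections import defaultdict

def flow_matrix(particles):
    """particles: iterable of (seed_channel, exit_channel) pairs."""
    counts = defaultdict(int)
    for seed, exit_ch in particles:
        counts[(seed, exit_ch)] += 1
    return counts

# 1000 particles seeded in channel 0, as in the study described above;
# the exit channels here are a hand-written fake sample.
sample = [(0, 0)] * 700 + [(0, 1)] * 250 + [(0, 3)] * 50
m = flow_matrix(sample)
print(m[(0, 0)], m[(0, 1)], m[(0, 3)])  # 700 250 50
```

The same tally, keyed on (seed channel, travel time bucket) instead, would give the exit-location-versus-travel-time view mentioned above.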
  • The algorithm for advecting particles is complex.
  • Two extremes:
  • Partition data over processors and pass particles amongst processors
  • Parallel inefficiency!
  • Partition seed points over processors and process necessary data for advection
  • Redundant I/O!
  • Hybrid solution:
  • Master-slave approach that adapts between parallel inefficiencies and redundant I/O
  • SC09 submission
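The tradeoff the hybrid scheme navigates can be sketched in miniature. This toy model is sequential and invented for illustration (the block layout, ownership table, and routing rule are assumptions, not the SC09 algorithm): a particle whose current block is resident on some worker is forwarded to that worker (communication), otherwise a worker must load the block (redundant I/O).

```python
# Toy sketch of the master-slave tradeoff for particle advection.
# The domain is split into blocks; each particle sits in some block.
def advect(particles, owner_of_block):
    """particles: list of block indices where each particle currently sits.
    owner_of_block: dict mapping block -> worker that holds it in memory.
    Returns (io_ops, comm_ops): block loads vs. particle hand-offs that a
    master incurs when routing each particle, preferring communication."""
    io_ops = comm_ops = 0
    for block in particles:
        if block in owner_of_block:
            comm_ops += 1   # block is resident: pass the particle along
        else:
            io_ops += 1     # block not resident: a worker loads it (extra I/O)
            owner_of_block[block] = "worker-%d" % (len(owner_of_block) % 4)
    return io_ops, comm_ops

owners = {0: "worker-0", 1: "worker-1"}   # blocks already in memory
io, comm = advect([0, 1, 2, 2, 1], owners)
print(io, comm)  # 1 4
```

The two extremes fall out as special cases: always forwarding (all communication, parallel inefficiency) or always loading locally (all I/O, redundant reads); the master adapts between them.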
  • The VisIt project provides outstanding leverage to this campaign.
  • The particle advection code represents 2+ man-years of effort from developers at UC Davis, Oak Ridge, Lawrence Berkeley, and Lawrence Livermore
  • Efforts of VACET, a SciDAC center for visualization and analysis
  • The development time to adapt this algorithm for our custom analysis is on the order of weeks.
  • Further, VisIt represents 50+ man-years of effort, much of which is highly relevant to this campaign, including:
  • Parallel infrastructure
  • Visualization algorithms (contouring, etc)
  • Parallel rendering
  • Etc.
  • Movie of the 37-pin simulation
  • Movie of the "fish tank" simulation
  • "Streamlines" to show movement within the fish tank
Summary of activities:
  • Activities are a mix of development and support
  • Development
  • File format readers
  • New analysis capability
  • Bug fixes / usability
  • Tuning, tuning, tuning
  • Support
  • Movies
  • “How do I …?”
  • Scripts
Outline
  • VisIt project overview
  • Visualization and analysis highlights with the Nek code
  • Why petascale computing will change the rules
  • Summary & future work
Petascale visualization and analysis will change the rules.
  • Michael Strayer (U.S. DoE Office of Science): "petascale is not business as usual." This is especially true for visualization and analysis!
  • Large scale data creates two incredible challenges: scale and complexity
  • Scale is not "business as usual": the current trajectory for terascale postprocessing will be cost prohibitive at the petascale, so we will need "smart" techniques in production environments
  • More resolution leads to more and more complexity. Will the "business as usual" techniques still suffice? (The complexity portion of this talk has been shortened; the point is that data analysis is key.)
Current modes of terascale visualization and analysis (1): a dedicated cluster, e.g. Gauss beside ASC BG/L
  • SC and dedicated cluster share disk
  • Dedicated cluster has good I/O access
  • SC runs a lightweight OS; dedicated cluster runs Linux
  • Graphics cards on the dedicated cluster
Current modes of terascale visualization and analysis (2): a portion of the SC itself, e.g. ASC Purple
  • Simulation and processing are both done on Purple
  • Simulation writes to disk; the vis. job reads from disk
  • SC runs a full OS (AIX)
  • No graphics cards
Example: Rayleigh-Taylor instability by the MIRANDA code, 27 billion elements, run on ASC BG/L and visualized on Gauss using VisIt. These modes of processing have worked well at the terascale.
Further observations about the "terascale" gameplan:
  • No need to move data. (Important!)
  • Not scaling up to huge numbers of cores.
  • Current algorithm used by major vis tools (VisIt, EnSight, ParaView): read in all data from disk and keep it in primary memory.
Buying a dedicated vis. machine will be cost-prohibitive at the petascale. Why is it so expensive? Visualization and analysis is I/O-intensive and memory-intensive; compute has become cheap, but memory and I/O are still expensive. Gauss (512 procs, beside the ~600 TF ASC BG/L) cost $1-$2M; in 4 years, the equivalent for a 5 PF machine would be ~5000? procs at $15M (!!!). Visualization performance is based on total memory and I/O.
  • The trend for the next generation of supercomputers is weaker and weaker relative I/O.
  • To maintain performance, we need more I/O, so we will have to use more nodes to get more I/O.
  • A recent series of hero runs on a 1 trillion cell data set used 16K cores: I/O = ~5 minutes, processing = ~10 seconds.
  • Vis is almost always >50% I/O and sometimes 98% I/O
  • Amount of data to visualize is typically O(total mem)
  • --> Relative I/O (the ratio of I/O bandwidth to total memory) is key
Using a portion of the SC is also problematic at the petascale.
  • Fundamentally, we are I/O-bound, not compute-bound, so multi-core has limited value-added
  • To increase I/O, we will need to use more of the machine
  • Additionally, use cases are "bursty": do we want the supercomputer sitting idle while someone studies results?
  • Can we afford to devote a large portion of the SC to visualization and analysis?
  • Lightweight OS's present challenges
Anecdotal evidence: relative I/O is getting slower at LLNL, as measured by the time to write memory to disk.
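The hero-run figures quoted above make the point concrete: with roughly 5 minutes of I/O against roughly 10 seconds of processing, the job is almost entirely I/O. A back-of-the-envelope check, using only numbers stated in this document:

```python
# Back-of-the-envelope: what fraction of a visualization job is I/O?
# Figures are the hero-run numbers quoted above (1 trillion cell data set).
io_seconds = 5 * 60      # ~5 minutes of I/O
compute_seconds = 10     # ~10 seconds of processing

total = io_seconds + compute_seconds
io_fraction = io_seconds / total
print(round(io_fraction * 100, 1))  # 96.8
```

That ~97% is consistent with the claim above that visualization is almost always >50% I/O and sometimes 98% I/O.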
  • “I/O doesn’t pay the bills.”
  • Part of the problem is HW (I/O & memory), but the rest is software.
Production visualization and analysis tools use "pure parallelism". Research has established "smart techniques" as viable, effective alternatives to pure parallelism:
  • Out-of-core processing
  • In situ processing
  • Multi-resolution techniques
Not going to dig in on these, but none is a panacea in isolation; there are gaps here (production-readiness & more). These techniques are difficult to implement, but petascale computing makes them cost effective.
Summary of techniques:
  • Pure parallelism can be used for anything, but it takes a lot of resources
  • Smart techniques can only be used situationally.
  • Strategy 1:
  • Stick with pure parallelism and live with high machine costs.
  • Other strategies?
  • Here are some assumptions:
  • We’re not going to buy massive dedicated clusters
  • We can fall back on the super computer, but only rarely
  • Alternate strategy: use smart techniques (multi-resolution, in situ, out-of-core) for all visualization and analysis work, and fall back on pure parallelism for the remaining ~5% on the SC.

Outline
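As a flavor of the out-of-core idea named above, here is a generic sketch (not VisIt's implementation): stream the data in chunks that fit in memory, keeping only a running result. The chunk reader below stands in for file I/O and is purely illustrative.

```python
# Generic out-of-core sketch: compute the maximum of a derived quantity
# over a data set too large for memory by streaming fixed-size chunks.
def read_chunks(values, chunk_size):
    """Yield successive chunks, a stand-in for reading blocks from disk."""
    for i in range(0, len(values), chunk_size):
        yield values[i:i + chunk_size]

def out_of_core_max(values, chunk_size=4):
    """Peak memory is one chunk plus one scalar, regardless of data size."""
    running = float("-inf")
    for chunk in read_chunks(values, chunk_size):
        derived = [v * v for v in chunk]   # a toy derived quantity: v^2
        running = max(running, max(derived))
    return running

print(out_of_core_max([3, -5, 2, 1, 4, 0, -2]))  # 25
```

The cost is repeated reads if a later pass needs the data again, which is exactly the I/O-versus-memory tradeoff the strategy discussion weighs.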
  • VisIt project overview
  • Visualization and analysis highlights with the Nek code
  • Why petascale computing will change the rules
  • Summary & future work
Summary & Future Work
  • This effort enables Nek (& others) to successfully meet their visualization and analysis needs.
  • Lots of future work:
  • In situ visualization and analysis
  • Support for the petascale
  • Subsetting and enhancements for code interoperability
  • Energy groups
  • Continued support for analysis, scripts, movies, bugs, etc. (LOTS of time spent here)