About severe.im
Hi there.
Welcome to severe.im. Glad you found me. This is my crude weather page, where I plot satellite, radar, model, and surface data. I use many of these images in my daily teaching, but I'm happy to make them available publicly for anyone to access and share.
If you ever spot a problem, or want to reach out, there's always a contact form for that.
Below are some details about each section of the main page -- what data is used, where it comes from, and so on. If you're looking at the main page from a phone, you might not realize that the desktop version shows two columns (which is why I refer to "left" and "right" below).
Satellite Imagery (Left Column)
Both sections of satellite imagery use data from NOAA's GOES-19 satellite. For both sets of images, the channels used are:
- Visible: channel 2 (0.64 microns), the "red" band
- Infrared: channel 13 (10.3 microns), the "clean longwave window" band
- Water vapor: channel 8 (6.2 microns), the "upper-level tropospheric water" band
Main set of images
These data are available thanks to NOAA's agreement with the AWS (Amazon) Data Exchange. The NetCDF data files are plotted using a slew of Python libraries, which allows custom color tables and map projection adjustments. Although new data is available from the satellite every 5 minutes, I only download and plot new images every 15 minutes.
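For the curious, the first step of that download is just knowing where to look: the files in the public AWS buckets (e.g. noaa-goes19) are organized by product, year, day-of-year, and hour. Here's a rough sketch of that key construction -- not my actual production script, and the product name shown is just one example:

```python
from datetime import datetime, timezone

def goes_prefix(t: datetime, product: str = "ABI-L2-CMIPC") -> str:
    """Build the S3 key prefix used to list GOES ABI files for a given hour.

    Keys in the NOAA buckets are organized as
    <product>/<year>/<day-of-year>/<hour>/ ; listing that prefix (with,
    say, boto3 or s3fs in anonymous mode) returns the hour's NetCDF files.
    """
    return f"{product}/{t:%Y}/{t:%j}/{t:%H}/"

# e.g. CONUS Cloud & Moisture Imagery for 17 UTC on 2025-07-05
print(goes_prefix(datetime(2025, 7, 5, 17, tzinfo=timezone.utc)))
# → ABI-L2-CMIPC/2025/186/17/
```

Once you have the prefix, picking out the channel (C02, C08, C13) is just a matter of matching the filename.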
At the moment (July 2025), these images are hosted on a different server, but you should only notice that if you look at the raw HTML for each page. Eventually, I'll put everything back together in one place. Shouldn't affect the experience.
"Alternate" set of images
The alternate images are downloaded directly from the NESDIS Center for Satellite Application and Research (STAR) website and are then cropped using ImageMagick to produce each sector. Each page is a JavaScript loop that uses the long-running HTML5 AnimationS app (HAniS) from the University of Wisconsin.
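The cropping step is a one-liner per sector with ImageMagick's -crop geometry (WxH+X+Y). A small sketch of how that can be driven from Python -- the filenames and pixel offsets here are made up for illustration:

```python
import subprocess

def crop_sector(src, dst, width, height, x_off, y_off, run=False):
    """Build (and optionally run) an ImageMagick crop command for one sector.

    Geometry is WxH+X+Y: a width-by-height box whose top-left corner sits
    x_off pixels right and y_off pixels down from the source image's corner.
    """
    cmd = ["convert", src,
           "-crop", f"{width}x{height}+{x_off}+{y_off}",
           "+repage",   # drop the leftover virtual-canvas offset
           dst]
    if run:
        subprocess.run(cmd, check=True)  # requires ImageMagick on the PATH
    return cmd

# e.g. carve an 800x600 sector out of the full CONUS image
crop_sector("conus.png", "sector_se.png", 800, 600, 100, 50)
```

In practice this is just as easily a line in a shell script; the point is only that each sector is a fixed crop box applied to the same source image.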
Past data
For many, many years, I relied on a Unidata data feed that had satellite data available in AREA format and could be easily plotted in Gempak. But their server became less reliable in summer 2025 (federal support revocation / work stoppage issues), so images produced that way are no longer available.
Surface Station Plots (Left Column)
Surface data is received from a data feed provided by the Federal Aviation Administration (FAA), and maps are plotted for different parts of the country. In my first-year courses, I tend to skip the decoding (and contouring) of pressure, so the simple plots are quite accessible.
The first hourly run finishes about :04 after the hour, which is sometimes too early for the feed to have populated with enough data, so the maps can look bare. I re-run at about :15 after the hour, and by then there is usually enough data that the map looks "normal" and full of stations.
Only the current hour is plotted. There are no archived images saved. I recently (July '25) switched these from Gempak to Python and MetPy, but there are still some bugs to iron out. For example, I've noticed that the symbols occasionally don't match the original report.
Radar Imagery (Right Column)
Each of the two radar sections uses a distinct source.
"Regional Radar Filtered"
This section uses the MRMS Composite Reflectivity data produced by NOAA/NSSL. It is quality controlled, and it also has most of the non-meteorological returns removed. Most of the images are created in Python and with a black background, but I do produce two images in GrADS with a white background. The data are available thanks to NOAA's agreement with the AWS (Amazon) Data Exchange.
I plot composite reflectivity on these maps, even though base reflectivity would be a more accurate representation of the weather at the ground. I do that mainly because on large domains, when you are zoomed out quite far, it's difficult to see the few bright pixels of intense individual storms; composite reflectivity helps them stand out a bit.
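The composite/base distinction is simple enough to show in two lines: composite reflectivity is just the maximum reflectivity in each vertical column of the radar volume. A toy example with made-up numbers (the real MRMS grids are already composited for you):

```python
import numpy as np

# volume has shape (n_levels, ny, nx); values in dBZ (toy numbers).
volume = np.array([
    [[10.0, 20.0], [5.0, 30.0]],   # lowest tilt ("base" reflectivity)
    [[15.0, 55.0], [5.0, 25.0]],   # mid-level
    [[12.0, 40.0], [5.0, 10.0]],   # upper level
])

base = volume[0]                # what a base-reflectivity map would show
composite = volume.max(axis=0)  # column maximum → composite reflectivity

print(composite)
# [[15. 55.]
#  [ 5. 30.]]
```

Note the 55 dBZ core aloft that the base scan (which only sees 20 dBZ there) would miss entirely -- that's the "stick out" effect mentioned above.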
Instead of labeling the color bar with the original units of dBZ, on these images I've used a written scale that roughly corresponds to my interpretation of that dBZ value. The two GrADS images also have an hourly rainfall estimate scale, corresponding to the standard Marshall-Palmer conversion.
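The Marshall-Palmer conversion itself is a short formula: it assumes the reflectivity factor follows Z = 200 R^1.6, so once you undo the decibel scaling you can solve for the rain rate R. A minimal sketch:

```python
def dbz_to_rain_rate(dbz: float) -> float:
    """Convert reflectivity (dBZ) to rain rate (mm/h) via Marshall-Palmer.

    Marshall-Palmer assumes Z = 200 * R**1.6, with Z in mm^6/m^3
    and R in mm/h.
    """
    z = 10.0 ** (dbz / 10.0)          # undo the decibel scaling
    return (z / 200.0) ** (1.0 / 1.6)  # invert Z = 200 * R**1.6

for dbz in (20, 35, 50):
    print(f"{dbz} dBZ ≈ {dbz_to_rain_rate(dbz):.1f} mm/h")
```

Light rain at 20 dBZ works out to well under 1 mm/h, while a 50 dBZ core converts to nearly 50 mm/h -- which is roughly the spread the hourly rainfall scale on the GrADS images covers.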
In August '25 I switched the publicly available set of images from Gempak to Python, although I still produce the Gempak ones. Gempak plots county lines **much** faster, so I may put some of those back up at some point.
"Regional Radar Unfiltered"
This section uses the DHR Digital Hybrid Reflectivity national mosaic. The data are received from a Unidata feed, which unfortunately (in summer 2025) has become increasingly unstable -- hence this is now a legacy source for the page. This mosaic is unfiltered, and so substantial ground clutter is always present. That's good for detecting biological scatterers, however, so even with the frequent outages I'll keep this data on the page for now.
This section preserves the original dBZ units of the data. Each page is a JavaScript loop that uses the HTML5 AnimationS app (HAniS) from the University of Wisconsin and should auto-refresh (new images created every 15 minutes).
Forecast Model Output (Right Column)
The RAP and NAM sections are NCEP numerical weather prediction model output. The data are from NOAA and accessed thanks to their agreement with the AWS (Amazon) Data Exchange.
Current Hour RAP Fields
The 2-hour forecast ("F02") of the Rapid Refresh (RAP) model is used to represent the conditions of the "current hour." [reasoning to be added once I write it]
Almost all of the images are produced with GrADS, except for the isentropic charts, which are produced in Gempak since it handles the pressure-to-theta coordinate transformation natively.
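The quantity behind that transformation is the potential temperature, from Poisson's equation: theta = T * (1000 / p)^kappa, with kappa = Rd/cp (about 2/7 for dry air). Interpolating model fields onto surfaces of constant theta is what "isentropic coordinates" means. A minimal sketch of the formula itself:

```python
def potential_temperature(temp_k: float, pres_hpa: float) -> float:
    """Potential temperature theta = T * (1000 / p)**kappa (Poisson's eq.).

    kappa = Rd/cp ≈ 2/7 for dry air; temp_k in kelvin, pres_hpa in hPa.
    Interpolating fields to constant-theta surfaces is the
    pressure-to-theta transformation Gempak handles internally.
    """
    kappa = 2.0 / 7.0
    return temp_k * (1000.0 / pres_hpa) ** kappa

# 0 °C air at 500 hPa maps to a theta surface near 333 K
print(round(potential_temperature(273.15, 500.0), 1))
```

(At 1000 hPa, theta simply equals the temperature, which is a handy sanity check.)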
The most recent 24 hours are available, so users can cycle through the previous day for each field. No "future" data is plotted.
NAM Data
The North American Mesoscale (NAM) model output, at 40-km resolution (Grid 212), is plotted here. Maps are available every 6 hours (F00, F06, ...), up to the end of the model run at 84 hours. Only the 00Z and 12Z runs are plotted, and they overwrite the single viewing page.
Skew-T diagrams are plotted for selected cities. Also, I wrote a GrADS version of the classic "Weather" program written by Peter Neilley in the 1990s to print text output of some model fields for selected cities.
Hallway Display (Right Column)
This is the link to the hallway display we have on the fourth floor in our building on campus. The panels are not publicly changeable. That would be fun to implement in the future, though!
You're welcome to use this display wherever you have a large screen and want it consumed by weather. Or, I have a less intimidating four-panel display that I use on the projector at the start of class when I teach -- you're also welcome to link to that one whenever you'd like.
Additional Common Questions
What computing power does this take?
All of the data ingest, processing, and serving is done on a server that I pay for myself -- no resources from my work are used for this project (except when I occasionally log in from my desk at work to fix something). That's why you don't see any mention of my employer anywhere on these pages.
The server is hosted by Ionos; I've been with them since 2003 -- the days of "1and1" as a hosting provider. There isn't really that much data here, and there are no SQL database pulls or anything, just creating and serving images: so I only need a couple cores, a few gigabytes of RAM, and 50-60 gigabytes of space to produce everything efficiently.
What programming skill does something like this take?
The primary programming and scripting skills I use to maintain the pages are:
- Unix command line fluency (a skill just about everyone needs)
- Shell scripts (a skill just about everyone needs)
- Cron jobs (a skill just about everyone needs)
- Python ("de rigueur" in meteorology)
- GrADS (old, but still works great)
- Gempak (old, but still works great)
Some of my colleagues make fun of me for still using GrADS and Gempak, but my philosophy is: use the tool that makes the job the easiest and does it the most efficiently. If that's an old tool, then so be it.
I learned Gempak as an undergraduate, and so have written scripts in it for two decades now. The user base is dwindling, but those of us who have stuck around love its ease of use and command-line feel. There are also some very passionate, very vocal folks who have continued to maintain the package after Unidata dropped support for it.
GrADS is what my advisor preferred in graduate school and so I worked very deeply with it for several years. For the record, one of the best and most popular weather data websites in the world has all its images developed in GrADS, too. I'll let you try and figure out which one.
When did you start doing this?
Some of my model scripts go back to 2004 and honestly haven't changed much since then. A grad school colleague, Chris H., shared one of his scripts to calculate several of the model fields and that was a huge help in getting started. Once you have your workflow figured out, and you know what colors and regions and data you want to use, the scripts don't need much changing.