Best Practices

This guide outlines best practices for creating and using effective data visualizations across your agency. It follows a simple four-part framework that spans planning and design through evaluation and long-term adoption. Together, these sections help agencies develop visualizations that are clear, practical, and sustainable over time.

1. Preparing for Success
Effective visualizations begin with clear communication goals and well-prepared data. This section helps you clarify your message and ensure your data is ready to support it.

2. Creating Your Visual
Strong design makes data easier to understand. This section explores how to select appropriate visual formats and apply design, accessibility, and interactivity principles to effectively reach diverse audiences.

3. Assessing What Works
Strong visualizations are not static. This section explains how to gather feedback, measure effectiveness, and refine visuals to better meet audience needs.

4. Implementing Across the Agency
Successful visualization requires more than good design. This section highlights the importance of training, leadership support, and integrating visualization practices into everyday agency workflows.
1. Preparing for Success

Three Pillars of Preparation

Before a single chart is drawn, three foundational questions need answers: What are you trying to communicate? What tools will serve your data and audience? And is your data ready? This guide addresses each in sequence.

1. Define Clear Objectives
2. Select the Right Tools
3. Prepare Your Data

Defining Clear Objectives

A visualization without a purpose is decoration. Every visual should connect to a specific agency goal—informing policy, engaging the public, or improving communication across departments.

Know Your Audience

A policymaker needs high-level summaries with clear takeaways. A technical analyst needs precision and detail. Developing audience personas—mapping out roles, knowledge levels, and expectations—ensures visuals land the way they’re intended to.

What Is Your Message?

Every visualization falls into one of four categories. Identifying yours early shapes every design decision that follows.

Inform

Share facts, status updates, or trends with a general audience.

Persuade

Convince the audience of a priority, need, or course of action.

Monitor & Evaluate

Track progress toward goals or assess outcomes over time.

Explore

Enable open-ended analysis, discovery, and insight generation.

Message & Audience Workflow

With message type and audience identified, this workflow guides you from core question to the most effective visual format and framing.

Message and audience workflow diagram

Figure: Message & Audience Workflow — from key message to impactful framing.

Data Storytelling

The most effective visualizations tell a story. Framing data around cause-and-effect or trends over time helps viewers grasp meaning quickly—and act on it. Titles, labels, and annotations reinforce the message. Color and layout direct the eye.

Actionable narrative example

Figure: A side-by-side comparison of passive vs. directed visual storytelling.

What Makes a Narrative Actionable?

An actionable narrative leaves little room for ambiguity. It guides the viewer toward a specific conclusion or decision by using a clear central question, visual hierarchy that supports the message, and annotations that explain rather than just label.

Integrating AI into Visualization Workflows

AI tools can be genuinely powerful—but their value depends entirely on the type of task. The framework below helps practitioners calibrate when and how to use them.

Reasoning Challenges
What it looks like: complex analytical thinking—designing measures, evaluating tradeoffs, interpreting interacting variables.
AI suitability: Moderate–High. AI supports analysis; the practitioner validates.

Effort Challenges
What it looks like: high-volume, repetitive work—cleaning datasets, generating draft charts, standardizing fields.
AI suitability: Very High. AI serves as the primary automation engine.

Coordination Challenges
What it looks like: aligning across departments, tracking inputs, reconciling feedback.
AI suitability: Moderate. AI documents and organizes; humans decide.

Domain Expertise
What it looks like: applying lived experience—stakeholder context, policy framing, visual judgment.
AI suitability: Low–Moderate. Reference support only; no substitute for judgment.

Ambiguity Challenges
What it looks like: the visualization objective is unclear; the right question hasn’t been defined yet.
AI suitability: Moderate. AI prototypes options; humans finalize direction.

Judgment / Courage
What it looks like: sensitive findings, equity outcomes, politically charged results.
AI suitability: Very Low. Advisory only; leadership must own these decisions.

The AI Fluency Map

Even when AI is well-suited to a task, outcomes depend on how effectively the practitioner engages it. These six competencies define what fluent AI use looks like in practice.

Prompt Design & Context Framing (Inputs)

Provide scope, constraints, and examples—not just a task request. Embed frameworks and specify tone, length, and format.

Strengthen: Front-load context. Show examples. Define success.

Technical Understanding (Oversight & Literacy)

Know the difference between pattern prediction and fact retrieval. Recognize training cutoffs and the potential for hallucinations.

Strengthen: Learn failure modes. Verify high-stakes facts.

Workflow Design & Integration (Integration)

Embed AI intentionally into day-to-day processes. Provide documents, scope, and defined asks—similar to onboarding a new team member.

Strengthen: Build repeatable workflows. Decompose complex tasks.

Advanced Prompting Techniques (Inputs)

Use structured methods—few-shot examples, staged reasoning, multi-step instructions—to improve output quality and control.

Strengthen: Ask AI to reason step-by-step. Separate analysis from formatting.

Critical Evaluation & Verification (Oversight & Literacy)

Check all outputs for accuracy, credibility, and fitness for purpose. Flag unsupported assertions; cross-reference key sources.

Strengthen: Use QA/QC checklists. Increase scrutiny outside core expertise.

Managing Expertise “Flattening” (Integration)

Prevent AI from producing technically correct but generic outputs. Ensure results reflect real tradeoffs, stakeholder context, and institutional realities.

Strengthen: Embed practitioner tensions. Require nuance, not summaries.

Preparing Data for Visualization

The quality of a visualization depends on the quality of the data behind it. Preparation means making data accurate, well-structured, and aligned with the analysis objective—before any visual is built.

1. Select a Tool
2. Explore the Data
3. Clean the Data
4. Transform the Data

Selecting Your Tool

The right platform works with your data format, supports your publishing needs, meets accessibility requirements, and is something your team can maintain. Key considerations include data connectivity, ease of use, interactivity, and support for automated refresh in dashboards that update frequently.

Becoming Familiar with Your Data

Before cleaning anything, explore. Numeric summaries give a quick picture—but visual exploration reveals what numbers alone cannot: patterns, clusters, outliers, and nonlinear relationships.

Why Both Matter

Anscombe’s Quartet and the Datasaurus Dozen are classic demonstrations of this principle. Multiple datasets can share nearly identical descriptive statistics while looking completely different when visualized. Always pair numeric and visual exploration—they are complementary, not interchangeable.

Anscombe's Quartet

Figure: Anscombe’s Quartet—four datasets with identical statistics, four very different patterns.

Datasaurus Dozen

Figure: Datasaurus Dozen—further proof that numeric summaries can hide dramatic visual differences.
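The point can be checked numerically. The sketch below uses the first two of Anscombe's four published datasets and plain Python: their means and correlations agree to two decimal places, even though a plot of dataset I is a loose linear scatter and dataset II is a smooth curve.

```python
import statistics

# Anscombe's Quartet, datasets I and II (Anscombe's published values)
x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

def pearson(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

for label, y in (("I", y1), ("II", y2)):
    print(f"Dataset {label}: mean={statistics.mean(y):.2f}, "
          f"var={statistics.variance(y):.2f}, r={pearson(x, y):.2f}")
# Both datasets report a mean of 7.50 and r of 0.82, yet dataset II
# is a near-perfect parabola that only a scatter plot reveals.
```

Running the same summary on all twelve Datasaurus Dozen datasets gives the same result: the numbers match while the pictures diverge wildly.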

Cleaning the Data

Unclean data produces distorted insights—and no amount of design polish can fix a flawed dataset. Six principles guide effective cleaning:

Validity

Data meets predefined criteria. Traffic counts should be positive whole numbers, not “abc.”

Accuracy

Data reflects reality—not just a correctly formatted number, but the right number.

Completeness

No critical fields are blank. Missing context undermines the whole analysis.

Consistency

Data aligns logically. Zero cyclists but “highly congested” is a red flag.

Uniqueness

No duplicate records. Counting the same vehicle twice inflates every metric.

Uniformity

One unit of measurement throughout. Miles and kilometers cannot coexist.

Best Practice

Define your method for handling outliers before you begin—not after you see results. Removing values only because they’re inconvenient introduces bias.
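These principles translate naturally into automated checks. The sketch below is illustrative only: the record fields and the 1.5×IQR threshold are assumptions chosen for the example, not requirements of this guide. It tests a few records against the validity, uniqueness, and completeness principles, and applies a pre-declared interquartile-range rule for flagging outliers, so removals are documented rather than ad hoc.

```python
import statistics

# Illustrative traffic-count records; field names are assumptions.
records = [
    {"id": 1, "street": "Main St", "bike_count": 42},
    {"id": 2, "street": "Main St", "bike_count": -3},   # fails validity
    {"id": 2, "street": "Main St", "bike_count": -3},   # duplicate id
    {"id": 3, "street": None,      "bike_count": 510},  # missing field
]

def check(records):
    """Flag violations of the validity, uniqueness, and completeness principles."""
    issues, seen = [], set()
    for r in records:
        if not (isinstance(r["bike_count"], int) and r["bike_count"] >= 0):
            issues.append((r["id"], "validity: count must be a non-negative integer"))
        if r["id"] in seen:
            issues.append((r["id"], "uniqueness: duplicate record id"))
        seen.add(r["id"])
        if any(v is None for v in r.values()):
            issues.append((r["id"], "completeness: missing field"))
    return issues

def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR]. The rule (and k)
    is fixed before looking at results, per the best practice above."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

print(check(records))
print(iqr_outliers([40, 42, 38, 45, 41, 510]))  # flags 510 for review
```

Note that flagged outliers are returned for review, not silently dropped; whether to exclude them remains an analyst decision.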

Transforming Data for Visualization

Once data is clean, it needs to be structured so visualization tools can use it. Most platforms expect tabular data: one row per record, one value per cell, consistent column names, no totals mixed in.

Non-tabular vs tabular data

Figure: Non-tabular vs. tabular data—the right structure makes a dataset ready for any visualization tool.

Getting to this structure typically requires one or more transformation methods. The examples below use bicycle traffic counts to keep the logic concrete.

Wide Format

Each variable gets its own column; each row is a unique entity. Ideal for comparing across variables in a single row.

Wide format example

Long Format

One column holds variable names; another holds values. Each row is a single observation. This is the format most visualization libraries and tools expect for charts like line graphs and grouped bars.

Long format example

Aggregation

Combine values using sum, mean, or count to surface broader trends. Instead of daily counts per street, show monthly averages.

Aggregation example
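The reshaping steps above can be sketched in plain Python. The column names and counts below are illustrative: a wide table of bicycle counts is "melted" into long format, then aggregated to a monthly average.

```python
from collections import defaultdict
from statistics import mean

# Wide format: one row per street, one column per month (illustrative data)
wide = [
    {"street": "Main St", "Jan": 120, "Feb": 90},
    {"street": "Oak Ave", "Jan": 60,  "Feb": 75},
]

# Wide -> long: one row per (street, month) observation
long_rows = [
    {"street": row["street"], "month": month, "count": row[month]}
    for row in wide
    for month in ("Jan", "Feb")
]

# Aggregation: average count per month across all streets
by_month = defaultdict(list)
for r in long_rows:
    by_month[r["month"]].append(r["count"])
monthly_avg = {m: mean(v) for m, v in by_month.items()}

print(long_rows[0])   # {'street': 'Main St', 'month': 'Jan', 'count': 120}
print(monthly_avg)    # {'Jan': 90, 'Feb': 82.5}
```

Dedicated tools (pandas `melt`/`pivot`, Power Query's unpivot) perform the same reshaping at scale; the logic is identical.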

Other Common Methods

Transposing swaps rows and columns—useful as an intermediate step when restructuring orientation.

Derived Metrics create new columns from existing data: percentage change, weekly totals, ratios.

Binning groups continuous values into categories (Low / Medium / High), simplifying distribution analysis.
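The derived-metric and binning methods just described are a few lines each. In this sketch the daily counts and the bin thresholds (50 and 100) are assumptions chosen for illustration:

```python
# Derived metric: percentage change between consecutive daily counts.
# Binning: group continuous counts into Low/Medium/High categories.
counts = [40, 50, 120, 90]

pct_change = [
    round((curr - prev) / prev * 100, 1)
    for prev, curr in zip(counts, counts[1:])
]

def bin_count(c, low=50, high=100):
    """Thresholds are illustrative; choose cut points before analysis."""
    if c < low:
        return "Low"
    if c < high:
        return "Medium"
    return "High"

bins = [bin_count(c) for c in counts]
print(pct_change)  # [25.0, 140.0, -25.0]
print(bins)        # ['Low', 'Medium', 'High', 'Medium']
```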

2. Creating Your Visual

Matching Chart Type to Communication Goal

Selecting the right visual representation starts with one question: what are you trying to communicate? Each goal calls for a distinct type of chart. The selector below maps five core communication goals to appropriate visual forms.

Visualization type selection chart

Figure: Visualization Type Selector—start with your communication goal, then follow the path to the right chart.

Comparison: how categories differ
Composition: parts of a whole
Distribution: spread and frequency
Relationship: correlations between variables
Hierarchy: levels and dependencies

Comparison

Bar and column charts let viewers evaluate differences between categories at a glance. A clustered bar chart, for instance, makes it easy to spot disparities in traffic volumes across regions.

Comparison chart examples

Composition

Pie, donut, and stacked bar charts show how parts contribute to a whole. Use these when one metric dominates—like funding allocation across projects. Limit segments and reserve pie charts for simple two-way splits.

Composition chart examples

Distribution

Histograms, box plots, and density plots reveal how data is spread—clusters, outliers, and frequency patterns that can directly inform policy or resource allocation decisions.

Distribution chart examples

Relationship

Scatter plots and bubble charts expose correlations, clusters, and outliers between variables. Bubble charts add a third dimension through size—useful for multi-factor analyses like speed, efficiency, and volume.

Relationship chart examples

Hierarchy

Tree diagrams, sunburst charts, and org charts reveal levels of importance and dependency. Use them to show complex structures like decision-making layers or project workflows.

Hierarchy chart examples

Color Selection

Color is one of the most powerful tools in a designer’s kit—and one of the easiest to misuse. Before choosing a palette, ask: what needs emphasis, what needs to stay in the background, and what comparisons must be visible? Always match palette type to your data.

Sequential color palette

Sequential

For values that increase or decrease in order. One hue, varying in lightness.

Diverging color palette

Diverging

For data with a meaningful midpoint, like above/below a target threshold.

Categorical color palette

Categorical

For comparisons across unrelated groups. Distinct hues with similar visual weight.

The 60/30/10 Rule

A practical framework for distributing color so the visual feels balanced and guides the viewer’s eye naturally.

60% — Base Color

A neutral or muted anchor: background, gridlines, axis labels, non-essential elements. Provides visual calm.

30% — Secondary Color

Distinguishes secondary data series. Should complement the base and maintain clear contrast without competing.

10% — Accent Color

Used sparingly to draw attention to the key insight—the most important trend, data point, or comparison. Its rarity is what makes it work.
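The rule can be applied mechanically when styling a chart. In the sketch below, every data series gets the muted secondary color except the one carrying the key insight, which gets the accent; the base color is reserved for background and gridlines. The hex values are illustrative choices, not a prescribed palette.

```python
# Assign 60/30/10 roles when coloring a chart's elements.
# Hex values are illustrative, not a prescribed agency palette.
BASE = "#e8e8e8"       # 60%: background, gridlines, axis labels
SECONDARY = "#7a9cb8"  # 30%: ordinary data series
ACCENT = "#d9541e"     # 10%: the one element to emphasize

def series_colors(categories, highlight):
    """Accent only the highlighted category; mute the rest."""
    return {c: (ACCENT if c == highlight else SECONDARY) for c in categories}

chart_style = {
    "background": BASE,
    "gridlines": BASE,
    "bars": series_colors(["Region A", "Region B", "Region C"],
                          highlight="Region B"),
}
print(chart_style["bars"]["Region B"])  # accent color
print(chart_style["bars"]["Region A"])  # secondary color
```

Because the accent appears exactly once, the viewer's eye lands on Region B without any further annotation.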

Reducing Visual Noise

Effective visualizations prioritize signal over decoration. Excess gridlines, heavy borders, and unnecessary shading compete with the data for attention. The guiding principle is simple: if an element doesn’t help the reader understand the data, remove it.

Consistent Alignment

Misaligned titles and uneven margins create subtle friction that makes charts feel cluttered even when content is minimal.

Intentional Whitespace

Generous margins and spacing between elements reduce cognitive load. Crowding creates visual fatigue.

Simplified Color Use

Avoid a different color for every category if the message only requires highlighting one or two key elements.

No Decorative Imagery

Photos and icons should support the message, not fill space. If it doesn’t help the reader, leave it out.

Accessibility

Roughly 1 in 12 men and 1 in 200 women experience some form of color vision deficiency. Designing accessible visuals isn’t just good practice—it ensures your data reaches the full audience, including those using assistive technologies or printing in grayscale.

Core Principle

Never rely on color alone to convey meaning. Pair color with patterns, line styles, shapes, or direct labels so that meaning survives in grayscale, on low-resolution screens, and for users with color vision deficiencies.

Common Color Vision Deficiencies

Deuteranopia (Green-blind)

Difficulty distinguishing greens from reds. Deutan-type deficiencies are the most common, affecting roughly 6% of males (mostly the milder deuteranomaly).

Protanopia (Red-blind)

Red tones appear darker or muted, making red-green combinations unreliable for conveying contrast.

Tritanopia (Blue-yellow blind)

Less common; affects ability to distinguish blues from greens and yellows from violets.

Grayscale / Print

Many users print reports or view them on monochrome displays. Always test by converting to grayscale before finalizing.

Recommended Practices

Use Blue & Yellow

One of the most consistently distinguishable pairs across all major types of color vision deficiency.

Check Contrast Ratios

Aim for WCAG compliance: 4.5:1 minimum for body text and graphical elements conveying meaning.

Add Patterns & Textures

Stripes, dots, dashes, and shapes reinforce distinctions that color alone cannot guarantee.

Use Direct Labels

Placing labels directly on data elements reduces reliance on color legends and speeds comprehension.

Test in Grayscale

Convert your chart to grayscale before finalizing. If categories become indistinguishable, adjust brightness, line weight, or patterning—don’t rely on hue alone.
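Both the contrast check and the grayscale check can be automated. The function below implements the WCAG 2.x relative-luminance and contrast-ratio formulas; the same luminance value doubles as a grayscale preview, since two colors with nearly equal luminance will merge in print. The sample RGB values are illustrative.

```python
def srgb_to_linear(c):
    """Linearize one sRGB channel (0-255) per the WCAG 2.x definition."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """WCAG relative luminance: 0.2126 R + 0.7152 G + 0.0722 B (linearized)."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = relative_luminance(rgb1), relative_luminance(rgb2)
    return (max(l1, l2) + 0.05) / (min(l1, l2) + 0.05)

white, black = (255, 255, 255), (0, 0, 0)
print(round(contrast_ratio(white, black), 1))  # 21.0, the maximum possible

# A red/green pair can fail the 4.5:1 threshold even though the
# hues look distinct to viewers with typical color vision.
red, green = (200, 60, 60), (60, 160, 60)
print(contrast_ratio(red, green) >= 4.5)
```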

Alt Text and Document Tagging

Color-safe palettes help sighted users, but accessible visuals also need to work for screen reader users. Two practices make the difference:

Alternative Text

Write alt text that conveys the key insight, not a description of visual appearance. Ask: what should the reader take away from this chart? If it’s decorative, mark it as such.

Document Tagging

Tagged PDFs define reading order, headings, and figure roles so assistive technologies can present content coherently. Without tags, even well-written alt text may fail to reach the user in context.

For comprehensive, tool-specific accessibility guidance, Section508.gov serves as the central federal resource.

Font Selection and Typography

Typography shapes how quickly and accurately readers absorb information. The right font choices establish hierarchy, support accessibility, and keep the reader’s focus on the data—not the design.

Sans Serif for Digital; Serif for Print

For dashboards, infographics, presentations, and web content, sans serif fonts are preferred. They render more cleanly on screens at small sizes. Serif fonts perform better in long-form printed reports but can muddle chart labels and annotations at small sizes or low resolutions.

Build a Clear Typographic Hierarchy

Variation in size, weight, and color guides the viewer through information in the intended order. A consistent hierarchy makes data-dense visuals scannable.

Title: Visualization title
Subtitle: Key takeaway or supporting context
Axis Labels: Consistent, descriptive, not overpowering
Data Labels: Used sparingly to reinforce the key message
Footnotes: Source, methodology notes, caveats

Keep Fonts Minimal

Limit to one or two font families. Use bold or semibold for emphasis; italics sparingly for annotations. Consistency builds recognition across multi-page products.

Minimum Size & Contrast

12pt minimum for print body text; 9–10pt equivalent for digital graphics. Strong contrast between text and background is required for Section 508 compliance.

Avoid Thin Weights

Ultra-light font weights reduce legibility on screens. Ensure adequate line height (~1.2–1.4) and sufficient spacing between elements.

Embed Fonts in PDFs

Always verify that fonts embed correctly when exporting to PDF—screen readers depend on embedded font data to interpret text accurately.

Incorporating Interactive Elements

Interactive features allow users to explore data at their own depth—moving from high-level summaries to granular details. Done well, they increase engagement, improve comprehension, and let different audiences get what they need from the same visual.

Interactive dashboard elements example

Drill-Downs

Let users navigate hierarchies—from year to quarter to month, or country to state to city. Use dynamic titles and visual cues like arrows to guide interaction. Tooltips provide additional context at each level.

Filters

Enable users to refine views by time, geography, or category. Always show which filters are active, synchronize filters across related visuals, and provide a clear “Reset” option.

Tooltips

Deliver on-demand detail without cluttering the main visual. Keep content short, provide methodological context where useful, and position tooltips so they never obscure key data.
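The filter principles above (visible active filters, synchronized views, a clear reset) can be sketched as a small shared state object. The class and field names below are assumptions for illustration; dashboard tools implement the same pattern internally.

```python
# Illustrative filter-state manager for a dashboard: one state object
# drives every related visual, keeping filters synchronized.
class FilterState:
    def __init__(self, **defaults):
        self._defaults = dict(defaults)
        self.active = dict(defaults)

    def set(self, **kwargs):
        self.active.update(kwargs)

    def reset(self):
        """A clear 'Reset' option: restore every filter to its default."""
        self.active = dict(self._defaults)

    def summary(self):
        """Report which filters differ from defaults, so users
        always see what is currently active."""
        return {k: v for k, v in self.active.items()
                if v != self._defaults[k]}

state = FilterState(year="All", region="All")
state.set(year=2023)
print(state.summary())  # {'year': 2023}
state.reset()
print(state.summary())  # {}
```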

Mapping and Geospatial Visuals

Maps are among the most intuitive tools for communicating spatial data. A simple crash location map can immediately surface problem areas that a spreadsheet would never reveal. More advanced geospatial applications can analyze service gaps, demographic impacts, and corridor performance—but all maps depend on thoughtful choices around type, projection, and scale.

Choosing the Right Map Type

The choice of map type depends on what relationship you’re trying to show. Work through the selector below to find the right starting point for your data.

Map type selector

Figure: Map Type Selector—answer a few questions about your data to find the right visual approach.

Map Projections

Every map projection distorts the 3D Earth to fit a flat surface. The wrong projection can misrepresent geographic size and spatial relationships significantly. The four views below show the same U.S. geography projected differently—each creating a different impression of scale and proportion.

US projected four different ways
Map projection selection guide

Figure: Recommended projection types by map extent—local, national, or specialized corridor.

Use epsg.io to find specific projection codes suited to your geographic area.

Map Scale

Scale determines how much detail is visible and how large an area is shown. Match scale to the questions your audience needs to answer.

High-Detail Data

Use large-scale maps (zoomed in) for street-level crash data or intersection analysis.

Broad Overview

Use small-scale maps (zoomed out) for statewide corridors or regional trends.

Both at Once

Combine a regional overview with large-scale pop-out insets for areas of interest.

Interactive Maps

Provide zoom and filter controls so users can explore across scales on their own.

Real-Time Data

Adding live data feeds to maps—traffic congestion, accident alerts, transit operations—significantly enhances decision-making in dynamic contexts like traffic management and emergency response. Ensure refresh rates and data latency are matched to the use case.

3. Assessing What Works

The Cycle of Continuous Improvement

A strong visualization isn’t finished when it’s published—it’s finished when it’s working. That requires benchmarks for success (clarity, accuracy, and audience engagement) and a structured cycle for gathering feedback, identifying improvements, and measuring results.

Cycle of continuous improvement diagram

Figure: Cycle of Continuous Improvement.

The Insights Hub

Feedback flows from two directions: internal sources like team workshops and stakeholder reviews, and external sources like user surveys and usability testing. An Insights Hub consolidates these into one place—making it possible to analyze patterns, spot recurring gaps, and act on what’s learned.

Consolidating feedback this way keeps improvements aligned with both organizational goals and audience needs, rather than responding to individual complaints in isolation.

Insights hub feedback sources diagram

Figure: Feedback sources feeding into the Insights Hub.

Visualization Evaluation Matrix

This matrix provides a consistent framework for assessing any visualization across three dimensions: clarity, accuracy, and audience engagement. Score each criterion from 1 to 3, then use the results to pinpoint where to focus improvement efforts.

Clarity: Purpose Alignment
Guiding questions: Does the visualization convey its message clearly? Is the takeaway immediately apparent? Are annotations and labels clear?
3 (Strong): Purpose clear at first glance. 2 (Needs Work): Requires effort to understand. 1 (Significant Issues): Purpose unclear or misleading.

Clarity: Design Simplicity
Guiding questions: Is the visualization free from clutter? Are color, font, and chart type choices consistent and easy to read?
3 (Strong): Clean design enhances comprehension. 2 (Needs Work): Minor distractions present. 1 (Significant Issues): Overly complex or overwhelming.

Accuracy: Data Integrity
Guiding questions: Is the data accurately represented without distortions? Are sources cited?
3 (Strong): Precise, clearly sourced data. 2 (Needs Work): Minor ambiguities or inaccuracies. 1 (Significant Issues): Significant inaccuracies or opacity.

Accuracy: Context & Comparisons
Guiding questions: Does the visualization provide sufficient context (benchmarks, trends)? Are comparisons fair and relevant?
3 (Strong): Robust context supports decisions. 2 (Needs Work): Context present but incomplete. 1 (Significant Issues): Lacks context for interpretation.

Audience Engagement: Relevance to Audience
Guiding questions: Is the visualization tailored to the audience’s knowledge and needs? Does it address stakeholder questions?
3 (Strong): Fully aligned with audience expectations. 2 (Needs Work): Partially aligned; some gaps. 1 (Significant Issues): Misaligned with audience needs.

Audience Engagement: Interactivity & Accessibility
Guiding questions: Are interactive features intuitive? Are insights accessible to diverse audiences, including colorblind viewers?
3 (Strong): Highly engaging and accessible. 2 (Needs Work): Moderate engagement; some accessibility issues. 1 (Significant Issues): Limited engagement and accessibility.

Common Fixes

Start With the End in Mind

Plan for the questions you expect stakeholders to ask of the data before building the visual.

Design From the Ground Up

Begin with the simplest possible visual and only add elements—icons, colors, labels—that serve a specific purpose.

Sort and Order Logically

Use frequency, value, group, or alphabetical ordering to make patterns immediately readable.

Annotate for Clarity

Add annotations and data labels to draw attention to the key insight and reduce interpretive burden on the viewer.

Staying Relevant Over Time

Audience needs evolve. Organizational priorities shift. A feedback loop that incorporates user insights into future visualization practices is what keeps visuals from going stale. This means regular reviews, fresh feedback rounds, and incremental updates based on what’s learned.

Step 1: Gather Feedback
Step 2: Analyze Strengths & Gaps
Step 3: Implement Refinements
Step 4: Evaluate Against Benchmarks
4. Implementing Across the Agency

Common Barriers and How to Overcome Them

Organizational inertia is one of the biggest obstacles to adopting visualization practices. The graphic below captures the most common challenges agencies face—and the solutions that move them forward.

Common organizational barriers and solutions

Figure: Common Organizational Barriers and Solutions.

Siloed Departments

Challenge: Teams work in isolation, limiting collaboration and comprehensive visualizations.
Solution: Form cross-departmental teams—planning, IT, and communications together—to ensure visuals are technically sound and broadly aligned.

Resistance to New Tools

Challenge: Unfamiliarity or skepticism about the value of visualization leads to slow adoption.
Solution: Engage staff early, share tangible success stories, and provide training that demystifies the tools and their benefits.

Limited Capacity

Challenge: Competing priorities leave insufficient time and resources for visualization work.
Solution: Build momentum with small wins, and make the case through concrete examples of improved decision-making and public engagement.

How Leaders Champion Visualization

Cultural change starts at the top. When leaders incorporate visualizations into their own reports and meetings, it signals to the organization that these tools are essential—not optional. The four-step framework below outlines how leadership can build and sustain momentum.

Leadership four-step framework

Figure: Four-step framework for leadership championing of visualization practices.

Step 1

Set the Tone

Model the use of visualizations in meetings, decisions, and reports to signal their importance.

Step 2

Show Alignment

Connect visualizations to strategic goals—transparency, public trust, performance metrics—and reference peer agency examples.

Step 3

Build Momentum

Highlight internal success stories to inspire teams and create a positive feedback loop for adoption.

Step 4

Commit to Action

Allocate resources, tools, and training—embedding visualization into workflows rather than treating it as a side project.

Embedding Visuals into Daily Workflows

Visualizations become a cultural norm when they’re woven into how an agency operates day-to-day—internal reporting, external communication, public engagement. Making them expected and routine is what moves them from occasional outputs to organizational standards.

Integrate celebrate showcase success cycle

Figure: Four-step cycle for integrating visualization into agency workflows.

1

Integrate

Embed visuals into regular internal reports and public communications to establish them as essential tools.

2

Celebrate

Recognize successful visualization efforts in newsletters and team meetings to motivate continued innovation.

3

Showcase

Share visuals publicly to demonstrate the agency’s commitment to transparency and build public trust.

4

Sustain

Use momentum from effective visuals to drive ongoing improvements and continued investment.

Training and Professional Development

A capable visualization workforce starts with targeted training matched to different roles. Entry-level staff need foundations—chart selection, data preparation. Advanced practitioners need tools for interactivity and mapping. The process below guides agencies from skills assessment through ongoing curriculum development.

Training development process flowchart

Figure: Training development process—from identifying gaps through continuous curriculum refinement.

Six Areas of Expertise

A well-rounded visualization program addresses six complementary domains. Together they cover the full pipeline from raw data to published, accessible output.

Six expertise areas part 1 Six expertise areas part 2

Figures: Six key expertise areas for visualization training and hiring.

The Knowledge Hub

To sustain capability over time, agencies should build an internal knowledge hub—a shared home for templates, guidelines, best practices, and reusable resources. It reduces the learning curve for new staff and keeps institutional knowledge from walking out the door.

Knowledge hub components

Figure: Key components of a data visualization knowledge hub.

Ready-Made Resources

Esri’s Living Atlas, publicly accessible datasets, and plug-and-play dashboard apps provide strong starting points—letting teams focus on interpretation and communication rather than technical setup.

Resources by Expertise Area

The following freely available resources are organized by the six expertise areas. Links may change over time as web content evolves.

Data Wrangling

Preparing, cleaning & transforming data

Data Synthesis

Translating data into clear narratives
Tools: PowerPoint, Adobe Creative Suite, ArcGIS StoryMaps, Tableau, Power BI

Infographics

Static visuals for reports & public comms
Tools: Adobe Illustrator, InDesign, Canva, PowerPoint

Dashboards

Interactive performance monitoring tools

Mapping

Spatial visualization & geospatial storytelling
Tools: ArcGIS Pro, ArcGIS Online, ArcGIS Experience Builder, QGIS

Quality Control & Governance

Consistency, accessibility & sustainability
Tools: SharePoint, Tableau Server, Power BI Service, ArcGIS Online