# Searching Evidence
MUSE includes a curated database of peer-reviewed research evidence. The Evidence Search page lets you browse, filter, and read the studies that underpin logic model connections. Whether you're looking for evidence to support a specific causal claim or simply exploring what the research says about your topic area, this is your starting point.
## Getting to Evidence Search
Click "Evidence" in the top navigation bar to open the Evidence Search page at muse.beaconlabs.io/search.
You'll see a search bar at the top and a grid of evidence cards below, showing all available research.
## Keyword Search
Type any word or phrase into the search bar to filter the evidence by content. The search looks across:
- Titles of the research papers
- Content of the evidence summaries
As you type, the results update in real time. The result count at the top of the grid ("X of Y") updates to show how many studies match your current search.
Search tips:
- Use specific terms related to your intervention area (e.g., "malaria", "microfinance", "digital literacy")
- Try searching for the outcome you care about (e.g., "employment", "school attendance", "maternal mortality")
- If you get too few results, try a broader or alternative term
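Conceptually, the keyword search behaves like a case-insensitive substring match over titles and summaries. The sketch below illustrates that behavior in Python; the field names (`title`, `summary`) and the function are illustrative assumptions, not MUSE's actual API:

```python
# Illustrative sketch of keyword filtering over evidence entries.
# Field names ("title", "summary") are assumptions, not MUSE's schema.

def search_evidence(entries, query):
    """Return entries whose title or summary contains the query, case-insensitively."""
    q = query.strip().lower()
    if not q:
        return list(entries)  # an empty query shows everything
    return [
        e for e in entries
        if q in e["title"].lower() or q in e["summary"].lower()
    ]

evidence = [
    {"title": "Bed nets and malaria incidence", "summary": "RCT of insecticide-treated nets"},
    {"title": "Microfinance and household income", "summary": "Quasi-experimental study"},
]
print(len(search_evidence(evidence, "malaria")))  # 1
```

Because the match is a plain substring test, partial words also hit ("net" matches "nets"), which is why broadening a term often recovers results.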
## Filtering by Effect Type
The Effect filter lets you narrow results by the type of outcome the research found. You can select multiple effect types at once.
| Effect Type | What It Means |
|---|---|
| Positive Effect | The expected effect was found and is statistically significant — the intervention worked as intended |
| No Effect | The expected effect was not observed — the intervention did not produce the hoped-for change |
| Mixed Effect | Results varied depending on context, population, or conditions — it worked in some cases but not others |
| Side Effect | An unintended effect was observed — something unexpected happened, positive or negative |
| Unclear | The data or methods were insufficient to draw a clear conclusion |
Studies that found no effect are just as important as those that found positive effects. Knowing what doesn't work — or under what conditions something doesn't work — is essential for designing better programs and avoiding wasted resources.
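Selecting several effect types acts as an OR across the chosen categories: a study is kept if it matches any selected type. A minimal sketch, assuming a lowercase label per entry (the exact labels MUSE stores are an assumption):

```python
# Illustrative: filtering by one or more selected effect types (OR semantics).
# The label strings are assumptions, not MUSE's actual values.

EFFECT_TYPES = {"positive", "no_effect", "mixed", "side_effect", "unclear"}

def filter_by_effect(entries, selected):
    """Keep entries whose effect type is in the selected set; empty selection means no filtering."""
    unknown = selected - EFFECT_TYPES
    if unknown:
        raise ValueError(f"unknown effect types: {unknown}")
    if not selected:
        return list(entries)
    return [e for e in entries if e["effect"] in selected]

studies = [
    {"title": "A", "effect": "positive"},
    {"title": "B", "effect": "no_effect"},
    {"title": "C", "effect": "mixed"},
]
# Selecting both "positive" and "mixed" keeps studies A and C.
print([s["title"] for s in filter_by_effect(studies, {"positive", "mixed"})])  # ['A', 'C']
```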
## Filtering by Strength of Evidence
The Strength filter lets you filter by the quality and rigor of the research methodology. MUSE uses the Maryland Scientific Methods Scale (SMS), a 0–5 star rating system widely used in evidence-based practice.
| Level | Stars | What It Means |
|---|---|---|
| Level 0 | No stars | Mathematical models or theoretical analyses — no empirical data |
| Level 1 | 1 star | Simple before-and-after comparison with no control group |
| Level 2 | 2 stars | Comparison between two groups, but not randomly assigned |
| Level 3 | 3 stars | Comparison with a control group using statistical controls for confounders |
| Level 4 | 4 stars | Quasi-experimental design with strong controls (e.g., difference-in-differences) |
| Level 5 | 5 stars | Randomized Controlled Trial (RCT) — the gold standard of evidence |
You can select one or more strength levels to filter. For example, if you want only the most rigorous evidence, select Level 4 and Level 5.
Higher-strength evidence is more rigorous, but lower-strength evidence is not worthless. For emerging program areas or under-researched populations, a well-designed Level 2 or Level 3 study may be the best available evidence. Consider the strength in context, not in isolation.
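Like the effect filter, the strength filter keeps any study whose SMS level is among the selected levels. A short sketch under the same assumptions (illustrative field names, not MUSE's schema):

```python
# Illustrative: keep only studies at the selected SMS strength levels (0-5).

def filter_by_strength(entries, selected_levels):
    """Keep entries whose SMS level is among the selected levels; empty selection keeps all."""
    if not selected_levels:
        return list(entries)
    return [e for e in entries if e["sms_level"] in selected_levels]

studies = [
    {"title": "RCT of cash transfers", "sms_level": 5},
    {"title": "Diff-in-diff on school feeding", "sms_level": 4},
    {"title": "Before-after literacy pilot", "sms_level": 1},
]
# "Only the most rigorous evidence": select Levels 4 and 5.
rigorous = filter_by_strength(studies, {4, 5})
print(len(rigorous))  # 2
```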
## Reading the Evidence Cards
Each result appears as a card in the grid. Here's what you'll see on each card:
- Title — The name of the research paper or evidence entry
- Author — Who produced the research
- Strength stars — The quality rating (0–5 stars)
- Key results — A short summary of what the research found
- Effect type icon — A visual indicator of the effect category (positive, no effect, mixed, side effect, or unclear)
- Tags — Topic categories that describe the evidence (e.g., "health", "education", "economic development")
- Intervention → Outcome pairs — One or more results per card, each identifying the intervention that was studied and the outcome that was measured
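The fields above suggest a card shape roughly like the following. This is a hypothetical sketch; the field names are assumptions for illustration, not MUSE's actual data model:

```python
# Hypothetical shape of an evidence card; field names are assumptions, not MUSE's schema.
from dataclasses import dataclass, field


@dataclass
class EvidenceCard:
    title: str
    author: str
    strength_stars: int                # 0-5, Maryland SMS rating
    key_results: str
    effect_type: str                   # e.g. "positive", "no_effect", "mixed"
    tags: list = field(default_factory=list)
    results: list = field(default_factory=list)  # (intervention, outcome) pairs

card = EvidenceCard(
    title="Bed nets and malaria incidence",
    author="Example et al.",
    strength_stars=5,
    key_results="Insecticide-treated nets reduced malaria incidence.",
    effect_type="positive",
    tags=["health"],
    results=[("Distribute bed nets", "Malaria incidence")],
)
print(card.title)
```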
## Viewing Full Evidence
Click on any evidence card to open the full Evidence Detail page for that study. There you'll find the complete research summary, methodology details, data sources, citations, and blockchain attestation records.
When MUSE's AI generates a logic model, it searches this same database and automatically links relevant evidence to causal connections. Green arrows on your canvas mean the AI found a matching evidence card for that specific link. You can use the Evidence Search page to explore that evidence or find additional studies to support your model.
## Using the Result Count
At the top of the results grid, you'll see a count like "12 of 47 results". This tells you:
- 12 — Number of entries that match your current search and filter combination
- 47 — Total number of evidence entries in the database
If the count drops to zero, try removing some filters or broadening your search terms.
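Putting the pieces together, the banner is just the size of the filtered set versus the size of the whole database. The sketch below combines illustrative keyword, effect, and strength filters into one count; all names and fields are assumptions, not MUSE's implementation:

```python
# Illustrative: apply all active filters, then format the "X of Y results" banner.
# Field names and filter semantics are assumptions for illustration.

def count_banner(entries, query="", effects=None, levels=None):
    """Apply keyword, effect-type, and strength filters; return 'X of Y results'."""
    matches = list(entries)
    q = query.strip().lower()
    if q:
        matches = [e for e in matches
                   if q in e["title"].lower() or q in e["summary"].lower()]
    if effects:
        matches = [e for e in matches if e["effect"] in effects]
    if levels:
        matches = [e for e in matches if e["sms_level"] in levels]
    return f"{len(matches)} of {len(entries)} results"

db = [
    {"title": "Bed nets and malaria", "summary": "RCT", "effect": "positive", "sms_level": 5},
    {"title": "Microfinance pilot", "summary": "before-after", "effect": "mixed", "sms_level": 1},
    {"title": "School feeding", "summary": "diff-in-diff", "effect": "no_effect", "sms_level": 4},
]
print(count_banner(db))                   # "3 of 3 results"
print(count_banner(db, query="malaria"))  # "1 of 3 results"
print(count_banner(db, levels={4, 5}))    # "2 of 3 results"
```

Because the filters apply in sequence, every active filter narrows the match count further, which is why removing filters is the quickest way back from a zero-result state.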