Searching Evidence
MUSE includes a curated database of peer-reviewed research evidence. The Evidence Search page lets you browse, filter, and read the studies that underpin logic model connections. Whether you're looking for evidence to support a specific causal claim or simply exploring what the research says about your topic area, this is your starting point.
Getting to Evidence Search
Click "Evidence" in the top navigation bar to open the Evidence Search page at muse.beaconlabs.io/search.
You'll see a search bar at the top and a grid of evidence cards below, showing all available research.
Keyword Search
Type any word or phrase into the search bar to filter evidence by title. Results update in real time as you type, and the count at the top of the grid shows how many studies match your current search.
Search tips:
- Use specific terms related to your intervention area (e.g., "malaria", "microfinance", "digital literacy")
- Try searching for the outcome you care about (e.g., "employment", "school attendance", "maternal mortality")
- If you get too few results, try a broader or alternative term
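Conceptually, the title search behaves like a case-insensitive substring match that is re-applied on every keystroke. The sketch below is illustrative only — the `filter_by_title` helper and the sample cards are hypothetical, not MUSE's actual implementation:

```python
# Illustrative sketch of the described behavior: case-insensitive
# substring match on titles, re-run as the query changes.
# Not MUSE's real code; data and helper names are hypothetical.

def filter_by_title(evidence, query):
    """Return entries whose title contains the query (case-insensitive)."""
    q = query.strip().lower()
    if not q:
        return list(evidence)  # an empty query shows everything
    return [e for e in evidence if q in e["title"].lower()]

cards = [
    {"title": "Bed nets and malaria incidence"},
    {"title": "Microfinance and household income"},
    {"title": "Digital literacy training for adults"},
]

print(len(filter_by_title(cards, "malaria")))  # 1
print(len(filter_by_title(cards, "")))         # 3
```

Note that a substring match is why broader terms ("health" rather than "maternal health clinics") tend to return more results.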
Filtering by Effect Type
The Effect filter lets you narrow results by the type of outcome the research found. You can select multiple effect types at once.
| Effect Type | What It Means |
|---|---|
| Positive Effect | The expected effect was found and is statistically significant — the intervention worked as intended |
| No Effect | The expected effect was not observed — the intervention did not produce the hoped-for change |
| Mixed Effect | Results varied depending on context, population, or conditions — it worked in some cases but not others |
| Side Effect | An unintended effect was observed — something unexpected happened, positive or negative |
| Unclear | The data or methods were insufficient to draw a clear conclusion |
Studies that found no effect are just as important as those that found positive effects. Knowing what doesn't work — or under what conditions something doesn't work — is essential for designing better programs and avoiding wasted resources.
Filtering by Strength of Evidence
The Strength filter narrows results by the methodological quality and rigor of the research. MUSE uses the Maryland Scientific Methods Scale (SMS), a 0–5 star rating system widely used in evidence-based practice.
| Level | Filter Label | Stars | What It Means |
|---|---|---|---|
| Level 0 | Mathematical Model | No stars | Mathematical models or theoretical analyses — no empirical data |
| Level 1 | Basic Comparison | 1 star | Simple before-and-after comparison with no control group |
| Level 2 | Controlled Comparison | 2 stars | Comparison between two groups, but not randomly assigned |
| Level 3 | Quasi-experimental | 3 stars | Comparison with a control group using statistical controls for confounders |
| Level 4 | Randomized Design | 4 stars | Quasi-experimental design with strong controls (e.g., difference-in-differences), approaching the rigor of randomization |
| Level 5 | RCT | 5 stars | Randomized Controlled Trial (RCT) — the gold standard of evidence |
You can select one or more strength levels to filter. For example, if you want only the most rigorous evidence, select Level 4 and Level 5.
Higher-strength evidence is more rigorous, but lower-strength evidence is not worthless. For emerging program areas or under-researched populations, a well-designed Level 2 or Level 3 study may be the best available evidence. Consider the strength in context, not in isolation.
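One way to picture how the Effect and Strength filters combine: selections within a single filter are OR-ed together, and the two filters are AND-ed with each other. That combination rule is an assumption about MUSE's behavior, not a documented guarantee, and the `apply_filters` helper and sample studies below are hypothetical:

```python
# Hypothetical sketch: OR within a filter, AND across filters.
# Assumed behavior only; helper and data are illustrative.

def apply_filters(evidence, effects=None, strengths=None):
    def keep(e):
        if effects and e["effect"] not in effects:
            return False  # fails the Effect filter
        if strengths and e["strength"] not in strengths:
            return False  # fails the Strength filter
        return True
    return [e for e in evidence if keep(e)]

studies = [
    {"title": "RCT of cash transfers", "effect": "positive", "strength": 5},
    {"title": "Before-after study of tutoring", "effect": "mixed", "strength": 1},
    {"title": "Diff-in-diff of job training", "effect": "no_effect", "strength": 4},
]

# Only the most rigorous evidence (Levels 4 and 5), any effect type:
rigorous = apply_filters(studies, strengths={4, 5})
print(len(rigorous))  # 2
```

Under this model, selecting Level 4 and Level 5 widens the strength criterion, while adding an Effect selection on top of it narrows the overall result set.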
Reading the Evidence Cards
Each result appears as a card in the grid. Here's what you'll see on each card:
- Title — The name of the research paper or evidence entry (clickable to open the detail page)
- Author — Who produced the research
- Publication date and Strength stars — When it was published and the quality rating (0–5 stars)
- Results — Up to 2 structured Intervention → Outcome pairs, each with an effect type icon (positive, no effect, mixed, side effect, or unclear). If there are more than 2 results, a "+X more results" indicator is shown
- Tags — Topic categories that describe the evidence (e.g., "health", "education", "economic development")
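The "+X more results" behavior on a card can be sketched as follows. This is a minimal illustration of the described display rule, assuming a simple list of result records; the `card_results_preview` helper and the sample data are hypothetical:

```python
# Illustrative sketch of the card preview rule described above:
# show up to 2 Intervention → Outcome pairs, then a "+X more" note.

def card_results_preview(results, limit=2):
    """Format up to `limit` pairs, appending a '+X more results' line."""
    shown = [f"{r['intervention']} → {r['outcome']} ({r['effect']})"
             for r in results[:limit]]
    extra = len(results) - limit
    if extra > 0:
        shown.append(f"+{extra} more results")
    return shown

results = [
    {"intervention": "Bed nets", "outcome": "Malaria cases", "effect": "positive"},
    {"intervention": "Bed nets", "outcome": "School attendance", "effect": "mixed"},
    {"intervention": "Bed nets", "outcome": "Household spending", "effect": "side effect"},
]
print(card_results_preview(results))
```

With three results and a limit of two, the preview shows the first two pairs plus a "+1 more results" indicator.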
Viewing Full Evidence
Click on any evidence card to open the full Evidence Detail page for that study. There you'll find the complete research summary, methodology details, data sources, citations, and blockchain attestation records.
When MUSE's AI generates a logic model, it searches this same database and automatically links relevant evidence to causal connections. Green arrows on your canvas mean the AI found a matching evidence card for that specific link. If external paper search was enabled during generation, academic papers from Semantic Scholar may also appear alongside curated evidence when you click a green arrow. You can use the Evidence Search page to explore curated evidence or find additional studies to support your model.
Using the Result Count
At the top of the results grid, you'll see a count like "12 of 47 results". This tells you:
- 12 — Number of entries that match your current search and filter combination
- 47 — Total number of evidence entries in the database
When filters are active, the count also shows how many filters are applied (e.g., "2 filters active"). You can click "Clear all filters" to reset all search and filter criteria at once.
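Assembling the count header can be pictured like this. The `result_count_header` helper and the " · " separator are assumptions for illustration, not MUSE's actual rendering code:

```python
# Illustrative sketch of the result-count header described above.
# Helper name and separator are hypothetical.

def result_count_header(matching, total, active_filters):
    parts = [f"{matching} of {total} results"]
    if active_filters:
        n = len(active_filters)
        parts.append(f"{n} filter{'s' if n != 1 else ''} active")
    return " · ".join(parts)

print(result_count_header(12, 47, ["effect", "strength"]))
# 12 of 47 results · 2 filters active
```

"Clear all filters" would correspond to emptying both the query and the active-filter list, so the header returns to showing the full total.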
If the count drops to zero, try removing some filters or broadening your search terms.