
8 Grant Proposal Mistakes That Produce Rejections and Compliance Problems


TLDR

A budget narrative that doesn't match the program narrative line by line is the single most common reason a technically strong proposal is scored down by federal reviewers: the mismatch signals that the applicant organization doesn't understand how the proposed activities will actually be implemented. These eight mistakes cause proposals to be rejected on technical grounds or funded in ways that create compliance problems for the entire award period.

Federal grant reviewers score proposals against explicit criteria, and a budget that doesn’t reconcile with the program narrative is the most visible signal that an applicant organization has not thought through implementation. The eight mistakes below produce either outright rejections or awards structured in ways that your grants manager will spend the entire period of performance trying to manage around.

Mistake 1: Budget Narrative That Doesn’t Match the Program Narrative Line-by-Line

The mistake: Your program narrative describes a robust community outreach strategy involving three community health workers conducting door-to-door outreach across four neighborhoods. Your budget shows one part-time outreach coordinator at 0.5 FTE. The discrepancy is obvious to any reviewer who cross-references the two sections, and cross-referencing the narrative against the budget is standard practice in federal review.

Why it happens: The program narrative and the budget are written by different people on different timelines. The program director writes the narrative to be as compelling as possible. The finance director or grants manager builds the budget to fit the award ceiling. Neither goes back and reconciles the two documents before submission.

The consequence: Federal grant reviewers evaluate budget-program alignment as an explicit scoring criterion in most NOFA scoring rubrics — it typically appears under “organizational capacity” or “management plan.” A budget that cannot support the described program activities signals to reviewers that either the program won’t be implemented as described or the applicant organization doesn’t have realistic cost experience. Either conclusion results in a lower score. If the proposal is funded despite the discrepancy, the grants manager must manage a program where the approved budget doesn’t match what was promised to participants — a compliance problem that surfaces at every monitoring visit.

The fix: After both narrative and budget are drafted, require a line-by-line reconciliation: every person mentioned in the narrative must appear in the personnel budget; every activity described must have a corresponding cost line; every piece of equipment or supply mentioned must be in the non-personnel budget. This reconciliation step should be on your internal proposal review checklist and should be completed by someone other than the writer — a second reviewer who cross-references both documents specifically for alignment.
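The reconciliation step lends itself to a simple checklist script. The sketch below is purely illustrative, assuming someone has already listed by hand the staff roles named in the narrative and the roles in the personnel budget; the role names are hypothetical examples.

```python
# Illustrative sketch: flag narrative staff roles with no personnel budget line.
# Role lists are hypothetical; in practice a second reviewer compiles them
# while reading both documents.

def find_unbudgeted_roles(narrative_roles, budgeted_roles):
    """Return roles described in the narrative that are missing from the budget."""
    budgeted = {r.lower() for r in budgeted_roles}
    return [r for r in narrative_roles if r.lower() not in budgeted]

narrative_roles = ["Community Health Worker", "Outreach Coordinator", "Program Director"]
budgeted_roles = ["Outreach Coordinator", "Program Director"]

# A non-empty result means the narrative promises staff the budget cannot pay for.
print(find_unbudgeted_roles(narrative_roles, budgeted_roles))
```

The same set-difference logic works in reverse: budget lines with no corresponding narrative activity are equally worth flagging.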


Mistake 2: Submitting Without a Logic Model, or With One That Doesn’t Map to Evaluation Measures

The mistake: Your proposal either omits a logic model entirely — because the RFP says it is “optional” — or submits a generic logic model from a previous application that lists outcomes which do not correspond to the evaluation measures described in your program narrative.

Why it happens: Logic models are perceived as a visual formality rather than a substantive program design tool. Development staff who are working against a submission deadline cut the logic model or reuse a prior one to save time.

The consequence: Federal program officers use the logic model to evaluate whether your program theory is internally consistent. When your logic model shows “improved long-term employment outcomes” as an outcome but your evaluation plan measures only placement rates (an output), the gap tells the reviewer that your evaluation plan cannot actually confirm whether your program produces the outcomes you claim. This type of inconsistency is a significant scoring liability in any program that requires a logic model, including most HHS, Department of Labor, and Department of Education competitive grants. For programs under the Evidence Act (P.L. 115-435), reviewers may also assess whether your evaluation design is capable of producing the evidence tier your application claims.

The fix: Build your logic model first — before writing the program narrative — and use it as the structural outline for both the narrative and the evaluation plan. Every outcome box in the logic model must appear as a named performance measure in the evaluation section, with a data source, measurement frequency, and target. If the RFP marks the logic model as optional, include it anyway: it signals program rigor and makes your proposal easier for reviewers to follow.


Mistake 3: Requesting Indirect Costs Without a Federally Negotiated Rate or a De Minimis Rate Election

The mistake: Your federal grant budget includes an indirect cost line calculated at “15% of direct costs” without any reference to a negotiated indirect cost rate agreement (NICRA), without documentation of a cognizant federal agency, and without a stated election of the de minimis rate under 2 CFR 200.414(f).

Why it happens: Indirect cost recovery feels like standard operating procedure, and the specific mechanism for claiming it — NICRA vs. de minimis election — is not obvious to development staff who are not deeply familiar with the Uniform Guidance.

The consequence: Under 2 CFR 200.414, indirect costs may only be charged to federal awards if the organization has either a current NICRA from its cognizant federal agency or has elected the 10% de minimis rate. A rate of 15% with no documentation is not a valid rate election. During a Single Audit or program close-out, indirect costs claimed at an undocumented rate are disallowed entirely — meaning you must return the indirect cost recovery for the entire award period. For a $500,000 award with 15% indirect claimed, that is a $75,000 disallowance. Additionally, the de minimis rate of 10% applies to Modified Total Direct Costs (MTDC), not total direct costs, so applying it to the wrong base overstates the allowable indirect recovery.

The fix: Before submitting any federal grant application, determine your indirect cost rate status: do you have a current NICRA, or are you electing the de minimis rate? If you are electing the de minimis rate, state this explicitly in the budget narrative: “Indirect costs are budgeted at 10% of Modified Total Direct Costs per 2 CFR 200.414(f). [Organization name] does not currently have a federally negotiated indirect cost rate agreement.” If you have a NICRA, include the rate, the base, and the agreement date. This is a compliance requirement, not a suggestion.


Mistake 4: Writing Evaluation Measures as Outputs Instead of Outcomes

The mistake: Your evaluation plan measures outputs — number of participants enrolled, number of training sessions conducted, number of meals served — and presents these as “program outcomes” in the proposal.

Why it happens: Outputs are easy to count and easy to project. Outcomes — measurable changes in the condition or behavior of program participants — require a measurement methodology that is harder to design and more uncertain to project.

The consequence: Federal funders, particularly at the Department of Health and Human Services, the Department of Education, and the Department of Labor, explicitly distinguish between outputs and outcomes in their NOFA scoring criteria. Proposals that present outputs as outcomes are scored lower on the evaluation plan criterion — typically a 10–20 point criterion in a 100-point scoring rubric. Reviewers note the distinction explicitly in their scoring comments. If funded, an output-focused evaluation plan also creates a grant management problem: you will have extensive data on program activity but no data on whether the program is working, which undermines your ability to write a competitive renewal application.

The fix: For every output measure in your evaluation plan, ask: “What change does this activity produce in participants?” That change is your outcome. Replace “number of job training sessions attended” with “percentage of participants employed in a position matching their training area within 90 days of completion.” Replace “number of meals served” with “percentage of households reporting food insecurity reduced from baseline at 6-month follow-up.” Every outcome measure must have a data source (participant survey, employer verification, state wage records), a measurement frequency, and a baseline or comparison group.


Mistake 5: Writing to Organizational Strengths Instead of the RFP’s Funding Priorities

The mistake: Your proposal narrative is organized around your organization’s history, track record, and program model — which are genuinely strong — but does not respond specifically to the funding priorities section of the Request for Proposals. The funder’s priority for a “trauma-informed, culturally responsive service model” appears nowhere in your narrative.

Why it happens: Organizations write from their existing program descriptions because starting from scratch takes time. The grant writer pastes in sections from a prior application and adjusts them for the new RFP rather than starting from the funder’s stated priorities.

The consequence: Federal and foundation reviewers score proposals against explicit criteria that mirror the funding priorities stated in the NOFA or RFP. If your narrative does not use the funder’s language and does not address their stated priorities in the order and weight the scoring rubric assigns them, reviewers cannot give you full credit for those criteria, even if your program actually meets the intent. The margin between funded and not-funded proposals is frequently two to five points: in a typical competition, a proposal that scores 85 out of 100 does not receive funding and one that scores 88 does.

The fix: Before drafting a single word of your narrative, print the RFP’s scoring criteria and assign a word count to each criterion proportional to its point value. A 15-point criterion in a 100-point rubric should receive approximately 15% of your narrative word count. Write each section of the narrative to directly answer the scoring criterion, using the funder’s exact language where possible. Complete this mapping exercise before you open a prior application for reference.
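The proportional-allocation step can be sketched in a few lines. The rubric criteria, point values, and 10,000-word limit below are hypothetical examples.

```python
# Minimal sketch of the mapping exercise: allocate narrative word count in
# proportion to each scoring criterion's point value. The rubric and the
# word limit are hypothetical.

def allocate_words(criteria_points, word_limit):
    """Return a target word count per criterion, proportional to points."""
    total_points = sum(criteria_points.values())
    return {name: round(word_limit * pts / total_points)
            for name, pts in criteria_points.items()}

rubric = {"Need": 20, "Program Design": 30, "Organizational Capacity": 15,
          "Evaluation Plan": 20, "Budget": 15}   # sums to 100 points

print(allocate_words(rubric, 10_000))
```

A 30-point criterion in a 100-point rubric gets 3,000 of 10,000 words; writing 500 words against it leaves up to 30 points on the table.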


Mistake 6: Omitting Key Personnel Qualifications When the Funder Requires Them

The mistake: The RFP requires a list of key personnel with resumes and a description of each person’s qualifications for their role in the proposed project. Your application lists job titles and general position descriptions but does not attach individual resumes or describe how each person’s specific experience qualifies them for the specific project role.

Why it happens: Key personnel sections feel like HR formalities. Grant writers who are focused on the program narrative treat personnel documentation as an attachment to collect at the end of the application process — and then run out of time.

The consequence: Federal grant reviewers score organizational capacity and management plan sections in part based on the demonstrated qualifications of the individuals who will implement the project. A personnel section that lists a “Program Director (to be hired)” with no resume attached and no description of required qualifications scores at or near zero on the personnel criterion, even if the program narrative is excellent. Additionally, for federal awards that designate key personnel in the award agreement, replacing a key person without prior approval from the grants officer is a compliance violation under the prior-approval requirements of 2 CFR 200.308, meaning the decision about who your key personnel are affects your compliance posture for the entire award period.

The fix: Identify your key personnel as early as possible in the application process — ideally before you begin writing the narrative. Write a qualification paragraph for each key person that maps their specific prior experience to the specific requirements of the proposed project. Attach current CVs or resumes. If a position is currently vacant, write the position description with sufficient specificity that a reviewer can assess whether the qualifications are credible and achievable, and include a hiring timeline.


Mistake 7: Requesting Budget Line Items That Are Unallowable Under the Funding Agency’s Cost Principles

The mistake: Your budget includes line items for a project kick-off celebration ($2,500), alcoholic beverages at a community engagement event ($800), a contingency reserve (5% of direct costs), and a contribution to your organization’s general operating fund ($10,000). None of these are flagged before submission.

Why it happens: Development staff who are not trained in the Uniform Guidance or the specific cost principles of the funding agency include budget items that seem reasonable from a program perspective without checking whether they are allowable under the applicable regulations.

The consequence: Under 2 CFR 200 Subpart E, alcoholic beverages (§200.423), entertainment costs without a specific program exception (§200.438), and contributions to contingency reserves without prior agency approval (§200.433) are explicitly unallowable on federal awards. Including these items in a federal grant budget either disqualifies the application during technical review or, if funded, creates a finding when the costs are incurred and reviewed during a Single Audit or monitoring visit. Disallowed costs must be returned to the federal agency regardless of when the error is identified.

The fix: Before finalizing any federal grant budget, run every line item against the applicable cost principles — 2 CFR 200 Subpart E for most federal awards, and the program-specific requirements in the OMB Compliance Supplement for major programs. Create a pre-submission budget review checklist that includes: unallowable costs check, indirect cost rate documentation, cost sharing commitment (if applicable), budget period alignment with program period, and equipment threshold review (equipment is defined in 2 CFR 200.1 as tangible personal property with a per-unit acquisition cost of $5,000 or more).
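A crude keyword screen can surface the most common unallowable items before a human review. The sketch below is illustrative only: the keyword-to-citation table is a simplified stand-in, not a substitute for reading 2 CFR 200 Subpart E.

```python
# Illustrative pre-submission screen: match budget line descriptions against
# a short list of commonly unallowable cost categories. The keyword table is
# a simplified example and does not cover the full Subpart E cost principles.

UNALLOWABLE_KEYWORDS = {
    "alcohol": "2 CFR 200.423 Alcoholic beverages",
    "entertainment": "2 CFR 200.438 Entertainment costs",
    "celebration": "2 CFR 200.438 Entertainment costs",
    "contingency": "2 CFR 200.433 Contingency provisions (prior approval required)",
}

def screen_budget(line_items):
    """Return (description, citation) pairs for lines that need human review."""
    flags = []
    for desc in line_items:
        for keyword, citation in UNALLOWABLE_KEYWORDS.items():
            if keyword in desc.lower():
                flags.append((desc, citation))
    return flags

budget = ["Project kick-off celebration", "Outreach supplies", "5% contingency reserve"]
for desc, citation in screen_budget(budget):
    print(f"REVIEW: {desc} -> {citation}")
```

A flag is a prompt to check the regulation, not an automatic deletion: entertainment costs, for example, can be allowable with a specific program exception.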


Mistake 8: Using a Budget That Builds In a 10% Contingency Without Labeling or Justifying It

The mistake: Your line-item subtotals look precise, but each one is padded so that the budget total runs roughly 10% above what the described activities actually cost — because your grants manager added a contingency to “cover unforeseen costs.” The budget narrative does not mention the contingency, and the surplus is distributed invisibly across line items.

Why it happens: Contingency budgets are standard practice in construction and project management. Grants staff without federal grant training apply the same logic to grant budgets without understanding the cost principles that restrict contingency charges.

The consequence: Under 2 CFR 200.433, contributions to a reserve or contingency fund are unallowable unless specifically approved by the federal awarding agency in writing. An unlabeled 10% contingency embedded in inflated line items is a different problem: it misrepresents the actual cost of the proposed activities, and if the award is made and the inflated costs are incurred, the excess over actual costs must be returned at close-out. Reviewers who catch the discrepancy between your narrative description of activities and your cost projections will score the budget down for lack of specificity and credibility — a penalty that applies whether or not the contingency is disclosed.

The fix: Build your grant budget from actual cost data — payroll records for salary rates, vendor quotes for equipment and supplies, GSA per diem rates for travel. If uncertainty genuinely exists for a specific line item, note it explicitly in the budget narrative: “Supplies costs are estimated based on prior year actuals; final costs may vary by ±5% depending on program enrollment.” Do not add a global contingency. If your program design has genuine uncertainty, build that uncertainty into your program narrative and evaluation plan — not your budget.
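One way to enforce the cost-basis discipline described above is a pre-submission check that the stated total matches the line-item sum and that every line traces to a documented source. This is an illustrative sketch; the line items, amounts, and cost bases are hypothetical.

```python
# Minimal budget sanity check, assuming each line carries the documented cost
# basis it was built from (payroll record, vendor quote, GSA rate). All line
# items, amounts, and bases here are hypothetical.

def check_budget(lines, stated_total):
    """Flag total/line-item mismatches and lines with no documented cost basis."""
    computed = sum(amount for _, amount, _ in lines)
    issues = []
    if computed != stated_total:
        issues.append(f"Total mismatch: lines sum to {computed}, stated {stated_total}")
    issues += [f"No documented cost basis: {name}"
               for name, _, basis in lines if not basis]
    return issues

lines = [("Program Director salary", 82_000, "FY24 payroll record"),
         ("Laptops (3)", 4_500, "vendor quote 2024-11-02"),
         ("Unforeseen costs", 15_000, "")]
for issue in check_budget(lines, 110_000):
    print(issue)
```

Here the check flags both problems at once: an unexplained gap between the stated total and the line items, and a line with no cost basis — exactly the profile of a hidden contingency.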


Frequently Asked Questions

What is the de minimis indirect cost rate and who can use it?
Under 2 CFR 200.414(f), any non-federal entity that has never received a federally negotiated indirect cost rate — or that has previously received a rate but chooses not to negotiate — may elect to use a de minimis indirect cost rate of 10% of modified total direct costs (MTDC). MTDC is defined in 2 CFR 200.1 and excludes equipment, capital expenditures, patient care charges, rent, tuition remission, and the portion of each subaward in excess of $25,000. The de minimis rate election must be applied consistently to all federal awards for which indirect costs are claimed and cannot be combined with a partial negotiated rate. The election is made on a per-award basis in the grant application budget. Organizations that have a cognizant federal agency must use their negotiated rate — they cannot elect the de minimis rate.
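The MTDC exclusions above translate into straightforward arithmetic. The figures in this sketch are hypothetical: $400,000 in direct costs, $30,000 of equipment, and two subawards of $60,000 and $15,000.

```python
# Worked MTDC example under the definition above. All figures are hypothetical.

direct_costs = 400_000          # all direct costs before exclusions
equipment = 30_000              # excluded from MTDC entirely
subawards = [60_000, 15_000]    # only the first $25,000 of each stays in MTDC

# Exclude the portion of each subaward above $25,000.
subaward_excess = sum(max(0, s - 25_000) for s in subawards)
mtdc = direct_costs - equipment - subaward_excess

de_minimis_indirect = 0.10 * mtdc
print(f"MTDC: ${mtdc:,.0f}; de minimis indirect: ${de_minimis_indirect:,.0f}")
```

Note that the $15,000 subaward stays in MTDC in full, while only the first $25,000 of the $60,000 subaward counts — the base is $335,000, not $400,000.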
What are unallowable costs under federal grant cost principles?
Unallowable costs under 2 CFR 200 Subpart E — the Uniform Guidance cost principles — include alcoholic beverages (§200.423), entertainment costs (§200.438), fines and penalties (§200.441), fundraising costs (§200.442), lobbying (§200.450), and contributions to a contingency reserve without prior written approval from the federal awarding agency (§200.433). Unallowable costs that appear in a grant budget are not merely scored down — they cause the application to be disqualified in many federal programs, and if they are funded and not identified until a Single Audit, they are disallowed and must be returned. The OMB Compliance Supplement, published annually, identifies the specific cost principles applicable to each federal program cluster.
What is a logic model and what must it contain for a federal grant application?
A logic model is a one-page diagram or table that maps the causal chain from your program inputs to your intended outcomes. Most federal grant applications require a logic model that shows: inputs (staff, funding, partner organizations, facilities), activities (what your program does), outputs (countable products of activities — number of participants served, workshops held, materials distributed), short-term outcomes (changes in knowledge, skills, or behavior within 6–12 months of participation), and long-term outcomes (sustained changes in condition or status at 1–3 years). The logic model must map directly to your evaluation measures: every outcome in the logic model must have a corresponding performance indicator, data source, and target in your evaluation plan. Federal reviewers use the logic model to assess whether the proposed activities are plausible given the budget, and whether the evaluation plan will actually measure the outcomes the program claims to produce.