# GitHub

Integrate brin into your GitHub CI pipeline with Actions workflows that automatically check contributor trust and scan pull requests for security signals.

## Contributor checks

Add a workflow that queries the brin contributor API for every PR author. It labels PRs as `contributor:verified` or `contributor:flagged` and posts a detailed comment when review is recommended.

### What it does

When a pull request is opened, reopened, or updated, the workflow:

  1. Queries the brin contributor API for the PR author's trust score
  2. Applies a label — `contributor:verified` (safe) or `contributor:flagged` (needs review)
  3. Posts a comment with threat signals, dimension breakdown, and a link to the full profile when flagged
  4. Cleans up the comment automatically when a previously-flagged contributor is re-evaluated as safe

### Workflow

Create `.github/workflows/contributor-check.yml` in your repository:

```yaml
name: Contributor Trust Check
 
on:
  pull_request_target:
    types: [opened, reopened, synchronize]
 
permissions:
  contents: read
  pull-requests: write
 
jobs:
  check:
    name: Scan PR author
    runs-on: ubuntu-24.04
    concurrency:
      group: brin-check-${{ github.event.pull_request.number }}
      cancel-in-progress: true
    steps:
      - name: Query Brin API
        id: brin
        env:
          # Pass the login through env rather than interpolating it into the script
          AUTHOR: ${{ github.event.pull_request.user.login }}
        run: |
          RESPONSE=$(curl -sf "https://api.brin.sh/contributor/${AUTHOR}?details=true&mode=full" || echo '{}')
          echo "response<<BRIN_EOF" >> "$GITHUB_OUTPUT"
          echo "$RESPONSE" >> "$GITHUB_OUTPUT"
          echo "BRIN_EOF" >> "$GITHUB_OUTPUT"
 
      - name: Apply label and comment
        uses: actions/github-script@v7
        env:
          BRIN_RESPONSE: ${{ steps.brin.outputs.response }}
        with:
          script: |
            const marker = "<!-- brin-check -->";
            const pr = context.payload.pull_request;
            let data;
            try {
              data = JSON.parse(process.env.BRIN_RESPONSE);
            } catch {
              core.warning("Failed to parse Brin API response");
              return;
            }
 
            if (!data.score && data.score !== 0) {
              core.warning("Brin API returned no score");
              return;
            }
 
            const score = data.score;
            const verdict = data.verdict ?? "unknown";
            const confidence = data.confidence ?? "unknown";
            const isSafe = verdict === "safe";
 
            const labels = [
              { name: "contributor:verified", color: "0969da", description: "Contributor passed trust analysis." },
              { name: "contributor:flagged",  color: "e16f24", description: "Contributor flagged for review by trust analysis." },
            ];
 
            for (const label of labels) {
              try {
                const { data: existing } = await github.rest.issues.getLabel({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  name: label.name,
                });
                if (existing.color !== label.color || (existing.description ?? "") !== label.description) {
                  await github.rest.issues.updateLabel({
                    owner: context.repo.owner,
                    repo: context.repo.repo,
                    name: label.name,
                    color: label.color,
                    description: label.description,
                  });
                }
              } catch (error) {
                if (error.status !== 404) throw error;
                await github.rest.issues.createLabel({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  name: label.name,
                  color: label.color,
                  description: label.description,
                });
              }
            }
 
            const nextLabel = isSafe ? "contributor:verified" : "contributor:flagged";
            const labelNames = labels.map((l) => l.name);
 
            const { data: currentLabels } = await github.rest.issues.listLabelsOnIssue({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: pr.number,
            });
 
            const preserved = currentLabels
              .map((l) => l.name)
              .filter((name) => !labelNames.includes(name));
 
            await github.rest.issues.setLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: pr.number,
              labels: [...preserved, nextLabel],
            });
 
            core.info(`PR #${pr.number}: ${verdict} -> ${nextLabel}`);
 
            if (isSafe) {
              const { data: comments } = await github.rest.issues.listComments({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: pr.number,
                per_page: 100,
              });
              const existing = comments.find((c) => c.body?.includes(marker));
              if (existing) {
                await github.rest.issues.deleteComment({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  comment_id: existing.id,
                });
              }
              return;
            }
 
            const sub = data.sub_scores ?? {};
            const fmt = (v) => (v != null ? String(Math.round(v)) : "\u2014");
 
            const verdictEmoji = {
              caution: "\u26A0\uFE0F",
              suspicious: "\u26D4",
              dangerous: "\uD83D\uDEA8",
            }[verdict] ?? "\u2753";
 
            let body = `${marker}\n`;
            body += `### ${verdictEmoji} Contributor Trust Check \u2014 Review Recommended\n\n`;
            body += `This contributor's profile shows patterns that may warrant additional review. `;
            body += `This is based on their GitHub activity, not the contents of this PR.\n\n`;
            body += `**[${data.name}](https://github.com/${data.name})** \u00b7 Score: **${score}**/100\n\n`;
 
            if (data.threats && data.threats.length > 0) {
              body += `#### Why was this flagged?\n\n`;
              body += `| Signal | Severity | Detail |\n|--------|----------|--------|\n`;
              for (const t of data.threats) {
                body += `| ${t.type} | ${t.severity} | ${t.detail} |\n`;
              }
              body += `\n`;
            }
 
            body += `<details>\n<summary>Dimension breakdown</summary>\n\n`;
            body += `| Dimension | Score | What it measures |\n|-----------|-------|------------------|\n`;
            body += `| Identity | ${fmt(sub.identity)} | Account age, contribution history, GPG keys, org memberships |\n`;
            body += `| Behavior | ${fmt(sub.behavior)} | PR patterns, unsolicited contribution ratio, activity cadence |\n`;
            body += `| Content | ${fmt(sub.content)} | PR body substance, issue linkage, contribution quality |\n`;
            body += `| Graph | ${fmt(sub.graph)} | Cross-repo trust, co-contributor relationships |\n\n`;
            body += `</details>\n\n`;
 
            body += `<sub>Analyzed by [Brin](https://brin.sh) \u00b7 [Full profile](${data.url}?details=true)</sub>\n`;
 
            const { data: comments } = await github.rest.issues.listComments({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: pr.number,
              per_page: 100,
            });
 
            const existingComment = comments.find((c) => c.body?.includes(marker));
 
            if (existingComment) {
              await github.rest.issues.updateComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                comment_id: existingComment.id,
                body,
              });
            } else {
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: pr.number,
                body,
              });
            }
```

### How it works

The workflow uses `pull_request_target` so it has write access to add labels and comments, even on PRs from forks. That trigger is safe here because the workflow never checks out or executes the PR's code. It runs in two steps:

  1. Query the brin API — calls the contributor endpoint with `details=true&mode=full` to get the full trust profile for the PR author. If the API is unreachable, it falls back to an empty JSON object and exits gracefully.
  2. Label and comment — ensures the `contributor:verified` and `contributor:flagged` labels exist, applies the appropriate one, and posts a detailed comment on flagged PRs with threat signals and a dimension breakdown. The comment uses an HTML marker to find and update itself on subsequent runs.
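
The script only acts on a handful of response fields. Reconstructed from what it reads, a flagged contributor's response might look like this — illustrative values, not the authoritative schema, and `jq` is assumed to be installed:

```shell
# Illustrative contributor API response. The shape is inferred from the
# fields the workflow reads (score, verdict, confidence, sub_scores,
# threats, name, url); the values are made up.
RESPONSE='{
  "name": "octocat",
  "score": 42,
  "verdict": "suspicious",
  "confidence": "high",
  "sub_scores": {"identity": 55, "behavior": 30, "content": 48, "graph": 61},
  "threats": [
    {"type": "unsolicited_prs", "severity": "medium", "detail": "High ratio of drive-by PRs"}
  ],
  "url": "https://brin.sh/contributor/octocat"
}'

# The same jq extraction style the workflows use:
jq -r '.verdict // "unknown"' <<<"$RESPONSE"   # suspicious
jq -r '.sub_scores.identity' <<<"$RESPONSE"    # 55
```

Any field that is absent falls back via `// "unknown"` (or `// empty`), which is why the script degrades gracefully on partial responses.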

The concurrency setting ensures only one check runs per PR at a time. If a new commit is pushed while a check is in progress, the running check is cancelled and replaced.

### Blocking merges on flagged contributors

Add a branch protection rule that requires the "Scan PR author" status check to pass. The workflow always succeeds (it labels rather than fails), so to enforce a merge block you can add a step that exits with a non-zero code when the verdict is not safe:

```yaml
      - name: Block if flagged
        if: steps.brin.outputs.response != '{}'
        env:
          # Read the JSON via env rather than interpolating ${{ }} into the
          # script, which breaks on quotes and invites injection
          RESPONSE: ${{ steps.brin.outputs.response }}
        run: |
          VERDICT=$(jq -r '.verdict // "unknown"' <<<"$RESPONSE")
          if [ "$VERDICT" != "safe" ]; then
            echo "::error::Contributor verdict is $VERDICT — merge blocked"
            exit 1
          fi
```

## PR security scan

Add a workflow that scans the entire pull request in a single API call through the brin PR API. This analyzes the PR author's identity, the aggregate diff, PR metadata, and runs LLM-powered threat detection — all in one request.

### What it does

When a pull request is opened or updated, the workflow:

  1. Calls the brin PR endpoint with the repository and PR number
  2. Waits for the full 3-tier scan to complete (author identity, diff analysis, LLM review)
  3. Posts a comment when the PR is flagged as suspicious or dangerous
  4. Fails the check if the score is below 30 or the verdict is dangerous

### Workflow

Create `.github/workflows/pr-scan.yml` in your repository:

```yaml
name: PR Security Scan
 
on:
  pull_request_target:
    types: [opened, reopened, synchronize, ready_for_review]
 
permissions:
  contents: read
  pull-requests: write
  issues: write
 
jobs:
  scan:
    name: Scan PR
    runs-on: ubuntu-24.04
    timeout-minutes: 10
    concurrency:
      group: brin-pr-${{ github.event.pull_request.number }}
      cancel-in-progress: true
 
    steps:
      - name: Scan PR with Brin
        id: scan
        shell: bash
        env:
          REPO: ${{ github.repository }}
          PR_NUMBER: ${{ github.event.pull_request.number }}
        run: |
          set -euo pipefail
 
          RESPONSE=$(curl -sfL --max-time 300 \
            "https://api.brin.sh/pr/${REPO}/${PR_NUMBER}?details=true&mode=full&tolerance=conservative" \
            || echo '{}')
 
          SCORE=$(jq -r '.score // empty' <<<"$RESPONSE")
          VERDICT=$(jq -r '.verdict // "unknown"' <<<"$RESPONSE")
          PENDING=$(jq -r '.pending_deep_scan // false' <<<"$RESPONSE")
 
          if [ -z "$SCORE" ] || [ "$PENDING" = "true" ]; then
            echo "status=inconclusive" >> "$GITHUB_OUTPUT"
            echo "should_fail=false" >> "$GITHUB_OUTPUT"
            exit 0
          fi
 
          echo "score=${SCORE}" >> "$GITHUB_OUTPUT"
          echo "verdict=${VERDICT}" >> "$GITHUB_OUTPUT"
 
          if [ "$VERDICT" = "dangerous" ] || [ "$SCORE" -lt 30 ]; then
            echo "status=blocking" >> "$GITHUB_OUTPUT"
            echo "should_fail=true" >> "$GITHUB_OUTPUT"
          elif [ "$VERDICT" = "suspicious" ]; then
            echo "status=review" >> "$GITHUB_OUTPUT"
            echo "should_fail=false" >> "$GITHUB_OUTPUT"
          else
            echo "status=clean" >> "$GITHUB_OUTPUT"
            echo "should_fail=false" >> "$GITHUB_OUTPUT"
          fi
 
          THREATS=$(jq -r '(.threats // [])[] | "- \(.type): \(.detail)"' <<<"$RESPONSE")
          {
            echo "threats<<BRIN_EOF"
            echo "$THREATS"
            echo "BRIN_EOF"
          } >> "$GITHUB_OUTPUT"
 
      - name: Comment on flagged PR
        if: steps.scan.outputs.status == 'blocking' || steps.scan.outputs.status == 'review'
        uses: actions/github-script@v7
        env:
          SCORE: ${{ steps.scan.outputs.score }}
          VERDICT: ${{ steps.scan.outputs.verdict }}
          STATUS: ${{ steps.scan.outputs.status }}
          THREATS: ${{ steps.scan.outputs.threats }}
        with:
          script: |
            const marker = "<!-- brin-pr-scan -->";
            const { owner, repo } = context.repo;
            const issue_number = context.payload.pull_request.number;
 
            const headline = process.env.STATUS === "blocking"
              ? "This PR has findings that should block merge."
              : "This PR has findings that should be reviewed.";
 
            let body = `${marker}\n### Brin PR Security Scan\n\n`;
            body += `${headline}\n\n`;
            body += `- **Score:** ${process.env.SCORE}/100\n`;
            body += `- **Verdict:** ${process.env.VERDICT}\n\n`;
            if (process.env.THREATS) {
              body += `**Findings:**\n${process.env.THREATS}\n\n`;
            }
            body += `<sub>Analyzed by [Brin](https://brin.sh)</sub>`;
 
            const comments = await github.paginate(github.rest.issues.listComments, {
              owner, repo, issue_number, per_page: 100,
            });
            const existing = comments.find((c) => c.body?.includes(marker));
 
            if (existing) {
              await github.rest.issues.updateComment({ owner, repo, comment_id: existing.id, body });
            } else {
              await github.rest.issues.createComment({ owner, repo, issue_number, body });
            }
 
      - name: Delete old comment when clean
        if: steps.scan.outputs.status == 'clean'
        uses: actions/github-script@v7
        with:
          script: |
            const marker = "<!-- brin-pr-scan -->";
            const { owner, repo } = context.repo;
            const comments = await github.paginate(github.rest.issues.listComments, {
              owner, repo,
              issue_number: context.payload.pull_request.number,
              per_page: 100,
            });
            const existing = comments.find((c) => c.body?.includes(marker));
            if (existing) {
              await github.rest.issues.deleteComment({ owner, repo, comment_id: existing.id });
            }
 
      - name: Fail if blocking
        if: steps.scan.outputs.should_fail == 'true'
        run: |
          echo "::error::Brin flagged this PR as dangerous or scoring below 30"
          exit 1
```

### How it works

The workflow calls the brin PR endpoint (`/pr/{owner}/{repo}/{number}`) with `mode=full`, which runs a complete 3-tier analysis:

  • Tier 1 — Author identity: account age, contribution history, org memberships, prior commits to the target repo
  • Tier 2 — Behavior and content: PR review status, diff analysis across all changed files, secret detection, agent config tampering, CI workflow changes, PR description injection patterns
  • Tier 3 — LLM-powered analysis: reads the full diff, PR description, and author profile to detect sophisticated threats (triggered conditionally based on risk signals)

The `mode=full` parameter makes the API wait for all tiers to complete before responding. The workflow needs only a single API call — no checkout, no commit iteration.
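
Even so, the scan step guards against an in-progress deep scan by checking `pending_deep_scan` and the presence of `score` before gating. That guard can be exercised locally with canned payloads — a sketch in which the JSON bodies are invented and `jq` is assumed to be installed:

```shell
# Reproduce the "inconclusive" guard from the "Scan PR with Brin" step.
# These payloads are invented; only the field names come from the workflow.
PENDING_RESPONSE='{"pending_deep_scan": true}'
DONE_RESPONSE='{"score": 74, "verdict": "safe", "pending_deep_scan": false}'

is_conclusive() {
  local score pending
  score=$(jq -r '.score // empty' <<<"$1")
  pending=$(jq -r '.pending_deep_scan // false' <<<"$1")
  # Conclusive only when a score exists AND the deep scan has finished
  [ -n "$score" ] && [ "$pending" != "true" ]
}

is_conclusive "$PENDING_RESPONSE" || echo "inconclusive"   # inconclusive
is_conclusive "$DONE_RESPONSE"    && echo "conclusive"     # conclusive
```

An inconclusive result writes `should_fail=false`, so a slow or partial scan never blocks a merge on its own.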

If the PR is flagged:

  • A sticky comment is posted with the score, verdict, and threat details
  • The comment updates itself on subsequent runs instead of creating duplicates
  • The check fails if the score is below 30 or the verdict is dangerous
  • The comment is automatically removed when the PR becomes clean
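The failure rule in the last bullets is the `if`/`elif` chain inside the scan step. Pulled out as a standalone function (a sketch mirroring the workflow's logic), it is easy to check before tuning the cutoff of 30:

```shell
# Mirror of the verdict/score gating in the "Scan PR with Brin" step.
# The threshold of 30 matches the workflow; change both together.
classify() {
  local verdict="$1" score="$2"
  if [ "$verdict" = "dangerous" ] || [ "$score" -lt 30 ]; then
    echo "blocking"
  elif [ "$verdict" = "suspicious" ]; then
    echo "review"
  else
    echo "clean"
  fi
}

classify safe 82        # clean
classify suspicious 55  # review
classify caution 12     # blocking (score below 30 overrides a softer verdict)
```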

### Combining with contributor checks

You can run both workflows in the same repository. They operate independently — contributor checks evaluate the PR author's profile, while PR scanning evaluates the actual code changes and PR metadata. Together they provide defense in depth: even a trusted contributor's compromised account would be caught if the PR contains malicious patterns.
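
To enforce both gates, you can require the two job names as status checks through GitHub's branch protection REST API. A sketch using the `gh` CLI — `octo-org/octo-repo` and `main` are placeholders, and the contexts must match the job `name:` fields in the workflows above:

```shell
# Require both Brin checks to pass before merging into main.
# PUT /repos/{owner}/{repo}/branches/{branch}/protection requires all four
# top-level keys, even when unused (hence the explicit nulls).
gh api -X PUT "repos/octo-org/octo-repo/branches/main/protection" \
  --input - <<'EOF'
{
  "required_status_checks": {
    "strict": true,
    "contexts": ["Scan PR author", "Scan PR"]
  },
  "enforce_admins": true,
  "required_pull_request_reviews": null,
  "restrictions": null
}
EOF
```

With this in place, the "Block if flagged" step and the PR scan's failure exit both translate directly into blocked merges.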