What Is Grafial
Most graph tools assume relationships are known: either an edge exists or it does not. Grafial starts from a more realistic premise. In many important problems, connections are partial, noisy, contested, or still emerging. Grafial is our open-source language for describing and reasoning about that uncertainty directly.
Instead of forcing the world into brittle binary structure, Grafial lets you work with probabilistic nodes, edges, attributes, evidence, and update rules as first-class concepts. You describe what is believed, what has been observed, and how beliefs should evolve. Grafial handles the machinery required to keep that reasoning coherent.
Why It Matters
A surprising number of real-world decisions depend on uncertain relationships.
In games, that might mean reasoning about:
- Likely social influence between players
- Trust and abuse signals across accounts
- Emerging guild or party structure
- Item or content affinities
- The probability that one event in a player journey meaningfully leads to another
In all of those cases, collapsing uncertainty too early can make downstream decisions brittle. Treating uncertainty explicitly leads to models and workflows that are easier to inspect, stress-test, and improve.
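To make that brittleness concrete, here is a toy sketch in plain Python (not Grafial; the probabilities are invented for illustration). Thresholding each uncertain edge into a hard fact and then chaining them reports certainty, while propagating the probabilities through the same chain supports the opposite decision.

```python
# Toy illustration: why collapsing uncertainty early is brittle.
# Two-hop influence A -> B -> C, where each edge is only probably present.

p_ab = 0.6  # belief that A influences B
p_bc = 0.6  # belief that B influences C

# Collapse early: threshold each edge into a hard fact, then chain.
exists_ab = p_ab > 0.5
exists_bc = p_bc > 0.5
collapsed_path = exists_ab and exists_bc  # True -> "A certainly reaches C"

# Keep uncertainty: propagate probabilities and decide at the end
# (assuming independent edges for this toy example).
p_path = p_ab * p_bc  # only a 36% chance A actually reaches C

print(collapsed_path, round(p_path, 2))  # True 0.36
```

The collapsed pipeline is confidently wrong more than half the time; the probabilistic one carries exactly how shaky the conclusion is.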
Grafial is built for that middle ground between rigid graph logic and hand-rolled probabilistic code.
What Makes Grafial Different
Grafial is not a graph library with a few uncertainty features added on top. It is a domain-specific language built around uncertain relational reasoning from the beginning.
With Grafial, you can declaratively define:
- Schemas, for the entities and relationships in your domain
- Priors, for what is believed before evidence arrives
- Evidence, for what observations should update those beliefs
- Rules and flows, for transforming, propagating, and querying uncertain structure over time
That means less hand-written probabilistic plumbing and a much clearer representation of the actual reasoning problem.
A Grafial example (Bayesian A/B testing) looks like this:
```
// Example: Bayesian A/B testing
// Classic Bayesian comparison of conversion rates between variants.
// Demonstrates decision-making under uncertainty with practical thresholds.

schema ABTest {
    node Variant {
        conversion_rate: Real
        sample_size: Real
    }
    edge OUTPERFORMS { }
}

belief_model TestBeliefs on ABTest {
    node Variant {
        // Prior centered near 10% conversion with moderate uncertainty.
        conversion_rate ~ Gaussian(
            mean=0.1,
            precision=10.0  // tau = 10 -> sigma^2 = 0.1
        )
        sample_size ~ Gaussian(mean=1000.0, precision=0.01)
    }
    edge OUTPERFORMS {
        // Prior: no preference before data.
        exist ~ Bernoulli(prior=0.5, weight=2.0)
    }
}

evidence VariantAData on TestBeliefs {
    // Variant A: 120 conversions / 1000 impressions -> observed 0.12.
    Variant { "A" { conversion_rate: 0.12, sample_size: 1000.0 } }
}

evidence VariantBData on TestBeliefs {
    // Variant B: 150 conversions / 1000 impressions -> observed 0.15.
    Variant { "B" { conversion_rate: 0.15, sample_size: 1000.0 } }
}

rule DetermineWinner on TestBeliefs {
    pattern
        (A:Variant)-[dummy_a:OUTPERFORMS]->(A:Variant),
        (B:Variant)-[dummy_b:OUTPERFORMS]->(B:Variant)
    where
        A != B
        and E[B.conversion_rate] > E[A.conversion_rate]
        and E[B.conversion_rate] - E[A.conversion_rate] > 0.02  // Practical lift threshold.
    action {
        // In production this could trigger deployment gates or follow-up tests.
        non_bayesian_nudge B.conversion_rate to E[B.conversion_rate] * 1.01 variance=preserve
    }
    mode: for_each
}

flow ABTestAnalysis on TestBeliefs {
    graph a = from_evidence VariantAData
    graph b = from_evidence VariantBData

    // Posterior intuition (illustrative):
    // A ~ 0.102, B ~ 0.105 after prior pooling.
    // Delta ~ 0.003: directional, but not practically large.
    graph with_winner = b |> apply_rule DetermineWinner

    metric mean_A = nodes(Variant)
        |> where(E[node.sample_size] > 0.0)
        |> avg(by=E[node.conversion_rate])

    metric good_variants = nodes(Variant)
        |> where(E[node.conversion_rate] > 0.12)
        |> count()

    export with_winner as "winner"
}

rule DecideWithLoss on TestBeliefs {
    pattern
        (A:Variant)-[dummy:OUTPERFORMS]->(A:Variant)
    where
        E[A.conversion_rate] > 0.11
    action {
        non_bayesian_nudge A.conversion_rate to E[A.conversion_rate] * 1.05 variance=preserve
    }
    mode: for_each
}

// Optional calibration and cleanup rules:

rule CalibrateLowRates on TestBeliefs {
    for (V:Variant) where E[V.conversion_rate] < 0.08 => {
        V.conversion_rate ~= 0.09 precision=0.5
    }
}

rule CleanupWeakEdges on TestBeliefs {
    pattern (X:Variant)-[xy:OUTPERFORMS]->(Y:Variant)
    where prob(xy) < 0.05 => {
        delete xy confidence=high
    }
}
```
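The "posterior intuition" figures in the flow's comments can be reproduced with a standard precision-weighted Gaussian conjugate update. The sketch below is plain Python, not Grafial; the observation precision (10/9) is an assumed value chosen so the arithmetic matches the illustrative numbers, and Grafial's actual evidence weighting may differ.

```python
# Precision-weighted Gaussian update behind the "prior pooling" comment
# in ABTestAnalysis. The observation precision is an assumption for
# illustration, not a value taken from the language semantics.

def gaussian_posterior_mean(mu0, tau0, x, tau_obs):
    """Posterior mean after combining a Gaussian prior (mu0, tau0)
    with a single Gaussian observation x of precision tau_obs."""
    return (tau0 * mu0 + tau_obs * x) / (tau0 + tau_obs)

TAU_PRIOR = 10.0       # prior precision on conversion_rate (from the belief model)
TAU_OBS = 10.0 / 9.0   # assumed effective observation precision

post_a = gaussian_posterior_mean(0.1, TAU_PRIOR, 0.12, TAU_OBS)
post_b = gaussian_posterior_mean(0.1, TAU_PRIOR, 0.15, TAU_OBS)

print(round(post_a, 3), round(post_b, 3))  # 0.102 0.105
print(round(post_b - post_a, 3))           # 0.003 -- below the 0.02 lift threshold
```

Because the strong prior pools both variants toward 0.1, the posterior gap (about 0.003) stays well under the 0.02 practical-lift threshold in `DetermineWinner`, which is exactly why the rule declares no winner on this data.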
Why We’re Excited About It
Grafial reflects a broader Iridae view that uncertainty should not be hidden behind a final score or collapsed into a fixed graph too early. It should be inspectable, composable, and usable throughout the reasoning process.
For researchers and ML engineers, Grafial is interesting because it turns uncertain relational inference into something programmable. For technical teams, it offers a clearer way to model, update, and reason over ambiguous structure without rebuilding the same logic from scratch each time.
Where It Has an Edge
Grafial is especially useful where relationships are only partially observed and where updates need to remain explainable.
That includes game-industry problems such as:
- Player influence and social propagation, where effects spread through uncertain and shifting connection structure
- Trust, fraud, and abuse reasoning, where links between entities often begin as probabilistic signals rather than hard facts
- Content and economy modeling, where affinities, substitutions, and interaction effects are rarely binary
- Experimentation and decision graphs, where evidence accumulates over time and should update beliefs consistently
- Quest, progression, or world-state dependencies, where the model needs to reason over uncertain causal or relational structure
In those settings, the ability to declare uncertain structure directly can be more valuable than forcing everything through a fixed graph pipeline.
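The experimentation bullet above, where evidence accumulates over time and should update beliefs consistently, has a compact classical form. The sketch below is plain Python, not Grafial; it uses a Beta-Bernoulli conjugate update, with invented daily batches that sum to the Variant A evidence from the example (120 conversions over 1000 impressions), to show batch-by-batch updating arriving at a single coherent posterior.

```python
# Evidence accumulating over time: a Beta prior over a conversion rate,
# updated one batch at a time. Conjugacy guarantees the final posterior
# is the same no matter how the data is split into batches.

def update_beta(alpha, beta, conversions, impressions):
    """Beta-Bernoulli conjugate update for one batch of observations."""
    return alpha + conversions, beta + (impressions - conversions)

alpha, beta = 1.0, 9.0  # weak prior centered at 0.1

# Hypothetical daily batches: (conversions, impressions),
# totaling 120 / 1000 like Variant A in the example.
batches = [(12, 100), (30, 250), (78, 650)]

for conv, imp in batches:
    alpha, beta = update_beta(alpha, beta, conv, imp)

posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))  # 0.12
```

This is the kind of bookkeeping Grafial's evidence blocks are meant to absorb: each new observation set updates beliefs through the declared priors, rather than through ad-hoc accumulator code like the loop above.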
Open Source
Grafial is open source, which means teams can inspect the language, experiment with it directly, and see how uncertainty-aware graph reasoning can be expressed in practice.
How It Fits Into Iridae
Inside Iridae, Grafial helps us prototype, validate, and operationalize uncertain relational reasoning more quickly. It gives us a shared language for describing belief, evidence, and relational transformation without rebuilding the same uncertainty machinery over and over.
For customers, that means more transparent reasoning over ambiguous structure. For builders, it means a serious tool for working with uncertain graphs in a way that is declarative, inspectable, and composable.