TL;DR
Oraclizer’s Oracle State Synchronizer (OSS) attacks the performance bottleneck of state synchronization with a 3-stage pipeline parallel processing architecture. Event collection, zk proof generation, and external API calls run in parallel, while state transition verification and conflict resolution remain sequential to preserve state consistency. Working within the theoretical bounds set by Amdahl’s Law, this design can reduce overall processing time by an estimated 50-70%. It also addresses the distinctive challenges of complex regulatory compliance verification and cross-chain state synchronization, and its synergy with the planned decentralized sequencer (D-quencer) points toward effectively unbounded horizontal scalability.
The Limits of Sequential Processing: Oracle’s Fate?
Existing blockchain oracles have had to process data sequentially, one by one, like vehicles passing through a narrow bottleneck. Taking Chainlink’s price feeds as an example, even though multiple nodes collect data, the final aggregation and on-chain updates occur sequentially. While this might be sufficient for simple price data transmission, it becomes a fatal limitation for Oraclizer’s ambitious goal of complete state synchronization.
State synchronization is not simply about moving data. Numerous sub-transactions are intertwined, including DAML contract state changes, regulatory compliance verification, cross-chain message transmission, and RWA Registry updates. Handled strictly sequentially, a single state synchronization could take tens of seconds, even minutes.
Here we must ask a fundamental question: “What if we could process multiple tasks simultaneously, while still ensuring state consistency?”
Theoretical Background of Parallel Processing
Amdahl’s Law and Oracle State Synchronization
The classic computer science principle, Amdahl’s Law, defines the theoretical limits of parallel processing. According to this law proposed by Gene Amdahl in 1967, system performance improvement is limited by the sequential portion that cannot be parallelized.
$$S = \frac{1}{(1-P) + \frac{P}{N}}$$
Where S is the speedup ratio, P is the fraction that can be parallelized, and N is the number of parallel processing units.
Interpreting this in the context of oracle state synchronization, we face two conflicting requirements:
- Parallelizable tasks: Independent event collection, parallel zk proof generation, multiple API calls
- Tasks that must be sequential: Ensuring logical order of state transitions, conflict resolution, regulatory compliance verification
A traditional approach would have been frustrated by this dilemma. But the Oraclizer team found an innovative solution: pipeline parallel processing.
Theoretical Advantages of Pipeline Architecture
Pipeline processing is a proven technique with a long history in CPU design. The fetch-decode-execute pipeline, pioneered in IBM’s Stretch project begun in 1956, became the foundation of modern processor design.
Applying this to oracle state synchronization:
- Increased throughput: Multiple state synchronization requests are processed simultaneously at different stages of the pipeline (a toy sketch follows this list)
- Optimized resource utilization: Specialized resources at each stage are utilized without idle time
- Predictable latency: Clear processing time for each stage makes overall latency predictable
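To make the throughput gain concrete, here is a toy pipeline in plain Python asyncio (an illustration, not Oraclizer code) that pushes three requests through collect, validate, and broadcast stages using the same stage times that appear later in this article. Because stages overlap across requests, all three finish in about 0.63s rather than the roughly 0.99s that three back-to-back runs would take:

```python
import asyncio
import time

STAGE_TIMES = {"collect": 0.10, "validate": 0.15, "broadcast": 0.08}  # seconds

async def stage(name, inbox, outbox):
    # Each stage repeatedly pulls a request, simulates its latency,
    # and hands the request to the next stage.
    while True:
        item = await inbox.get()
        if item is None:                 # shutdown signal propagates downstream
            if outbox is not None:
                await outbox.put(None)
            break
        await asyncio.sleep(STAGE_TIMES[name])
        print(f"{time.perf_counter() - START:5.2f}s  {name} finished request {item}")
        if outbox is not None:
            await outbox.put(item)

async def main():
    inbox, q1, q2 = asyncio.Queue(), asyncio.Queue(), asyncio.Queue()
    workers = [
        asyncio.create_task(stage("collect", inbox, q1)),
        asyncio.create_task(stage("validate", q1, q2)),
        asyncio.create_task(stage("broadcast", q2, None)),
    ]
    for request_id in range(3):          # three requests enter back to back
        await inbox.put(request_id)
    await inbox.put(None)
    await asyncio.gather(*workers)

START = time.perf_counter()
asyncio.run(main())
```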
OSS 3-Stage Pipeline Design
Oraclizer’s Oracle State Synchronizer implements the following sophisticated 3-stage pipeline:
Stage 1: Parallelizable Data Collection and Preparation
In the first stage, completely independent tasks are executed in parallel:
```python
import asyncio

class Stage1_ParallelCollection:
    def __init__(self):
        self.event_collectors = []
        self.proof_generators = []
        self.api_clients = []

    async def execute_parallel(self, sync_request):
        tasks = []

        # Event collection from multiple sources
        for source in sync_request.event_sources:
            tasks.append(self.collect_events(source))

        # Concurrent external API calls
        for api_endpoint in sync_request.external_apis:
            tasks.append(self.call_external_api(api_endpoint))

        # Independent zk proof generation
        for state_change in sync_request.state_changes:
            if self.is_independent(state_change):
                tasks.append(self.generate_zk_proof(state_change))

        # Execute all tasks in parallel
        results = await asyncio.gather(*tasks)
        return self.aggregate_results(results)
```
The key to this stage is identifying tasks without data dependencies and parallelizing them as much as possible. We can collect events from the CANTON network while simultaneously generating proofs for zkVerify and fetching market data from external financial APIs.
Stage 2: Sequential by Necessity – Guardian of State Consistency
The second stage must be executed sequentially to ensure state consistency:
```go
type Stage2_SequentialValidation struct {
    stateTree        *MerkleTree
    conflictQueue    *PriorityQueue
    complianceEngine *RCPValidator
}

func (s *Stage2_SequentialValidation) ProcessSequentially(inputs []StageInput) error {
    // 1. Ensure logical order of state transitions
    orderedTransitions := s.orderStateTransitions(inputs)

    for _, transition := range orderedTransitions {
        // 2. Atomic conflict resolution
        if conflict := s.detectConflict(transition); conflict != nil {
            resolution := s.resolveConflict(conflict)
            if resolution.RequiresRollback {
                return s.initiateRollback(transition)
            }
        }

        // 3. Regulatory compliance verification
        complianceResult := s.complianceEngine.Verify(transition)
        if !complianceResult.IsCompliant {
            return fmt.Errorf("RCP violation: %s", complianceResult.Reason)
        }

        // Update the state tree
        s.stateTree.Update(transition)
    }
    return nil
}
```
The Preemptive Lock mechanism plays a crucial role at this stage. When there are multiple state change requests for the same asset, the first request acquires the lock and the rest wait in the queue. This fundamentally prevents double-spending problems in cross-chain environments.
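As a minimal sketch of how such a per-asset lock queue can be structured (the class and method names here are illustrative, not Oraclizer’s actual API), the idea fits in a few lines of Python:

```python
import asyncio
from collections import defaultdict

class PreemptiveLockManager:
    """Illustrative per-asset lock: the first state-change request for an
    asset proceeds; later requests for the same asset wait in FIFO order."""

    def __init__(self):
        # One asyncio.Lock per asset id; asyncio's Lock wakes waiters in
        # acquisition order, giving the queue behavior described above.
        self._locks = defaultdict(asyncio.Lock)

    async def with_asset_lock(self, asset_id, state_change_fn):
        async with self._locks[asset_id]:
            # At most one state change per asset runs here at a time,
            # ruling out double-spend races across chains.
            return await state_change_fn()
```

A call such as `await manager.with_asset_lock("BOND-123", apply_transfer)` serializes every change touching BOND-123 while leaving unrelated assets fully parallel.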
Stage 3: Parallelizable Result Broadcasting
In the third stage, validated state changes are broadcast to multiple destinations simultaneously:
```rust
use std::sync::Arc;

impl Stage3_ParallelBroadcast {
    // `self` and the state are taken as Arc so that owned handles can be
    // moved into each spawned task.
    async fn execute_parallel(self: Arc<Self>, validated_state: Arc<ValidatedState>) -> Result<()> {
        let mut handles = vec![];

        // Cross-chain message broadcasting
        for chain in self.target_chains.clone() {
            let this = Arc::clone(&self);
            let state = Arc::clone(&validated_state);
            handles.push(tokio::spawn(async move {
                this.broadcast_to_chain(&chain, &state).await
            }));
        }

        // RWA Registry batch update processing
        let this = Arc::clone(&self);
        let state = Arc::clone(&validated_state);
        handles.push(tokio::spawn(async move {
            this.batch_update_registry(&state).await
        }));

        // Parallel event emission
        let this = Arc::clone(&self);
        let state = Arc::clone(&validated_state);
        handles.push(tokio::spawn(async move {
            this.emit_events_parallel(&state).await
        }));

        // Wait for all tasks: `?` on the JoinHandle, then on the task's Result
        for handle in handles {
            handle.await??;
        }
        Ok(())
    }
}
```
At this stage, messages can be sent simultaneously to multiple chains like Base L2, Arbitrum, and Optimism, with each chain’s bridge contract independently updating the state.
[Figure: OSS 3-stage pipeline. Stage 1 collects from multiple sources (CANTON, market data) and generates independent proofs in parallel; Stage 2 performs sequential validation under a Preemptive Lock with compliance verification; Stage 3 fans out message broadcast and batch updates in parallel.]
Challenges of Parallel Processing
State Consistency Guarantee Mechanisms
The biggest challenge in parallel processing is state consistency. When multiple tasks are executed simultaneously, how can we ensure that the final state is the same as sequential execution?
Oraclizer implements the following mechanisms:
1. Optimistic Locking with Validation
```go
type OptimisticLock struct {
    version   uint64
    lockedBy  string
    timestamp time.Time
}

func (o *OSS) ValidateAndCommit(stateChange StateChange) error {
    // Optimistic execution
    newState := o.executeOptimistically(stateChange)

    // Validation phase
    if !o.validateStateConsistency(newState) {
        // Conflict occurred - roll back and retry in sequential mode
        o.rollback(stateChange)
        return o.retryWithSequentialMode(stateChange)
    }

    // Commit
    return o.commitState(newState)
}
```
2. Dependency Graph Analysis
```python
class DependencyAnalyzer:
    def build_dependency_graph(self, transactions):
        graph = DirectedGraph()
        for i, tx1 in enumerate(transactions):
            for tx2 in transactions[i + 1:]:
                if self.has_dependency(tx1, tx2):
                    graph.add_edge(tx1, tx2)
        # Determine execution order through topological sorting
        return graph.topological_sort()

    def has_dependency(self, tx1, tx2):
        # Read-after-Write (RAW) dependency
        if tx1.writes.intersection(tx2.reads):
            return True
        # Write-after-Write (WAW) dependency
        if tx1.writes.intersection(tx2.writes):
            return True
        return False
```
Managing Order Dependencies
For financial RWAs in particular, transaction order affects the outcome: bond interest payments, for example, may need to be processed before principal repayment.
OSS manages this by building a dependency chain:
```rust
use std::collections::HashMap;

#[derive(Debug, Clone)]
struct DependencyChain {
    operations: Vec<Operation>,
    dependencies: HashMap<OperationId, Vec<OperationId>>, // key/value types inferred from usage
}

impl DependencyChain {
    fn can_execute_parallel(&self, op1: &Operation, op2: &Operation) -> bool {
        // Check direct dependencies
        if self.has_direct_dependency(op1.id, op2.id) {
            return false;
        }
        // Check indirect (transitive) dependencies
        if self.has_transitive_dependency(op1.id, op2.id) {
            return false;
        }
        // Check regulatory ordering requirements
        if self.has_regulatory_ordering(op1, op2) {
            return false;
        }
        true
    }
}
```
Complexity of Rollback on Failure
When some tasks fail in parallel processing, other completed parallel tasks may also need to be rolled back. This is known as the cascading rollback problem.
```go
type RollbackManager struct {
    checkpoints map[string]*StateCheckpoint
    mutex       sync.RWMutex
}

func (r *RollbackManager) CreateCheckpoint(stateId string) {
    r.mutex.Lock()
    defer r.mutex.Unlock()

    checkpoint := &StateCheckpoint{
        StateId:   stateId,
        Timestamp: time.Now(),
        StateHash: r.calculateStateHash(),
    }
    r.checkpoints[stateId] = checkpoint
}

func (r *RollbackManager) RollbackTo(checkpointId string) error {
    // Guard concurrent map access with a read lock
    r.mutex.RLock()
    checkpoint, exists := r.checkpoints[checkpointId]
    r.mutex.RUnlock()
    if !exists {
        return fmt.Errorf("checkpoint %s not found", checkpointId)
    }
    // Revert all operations performed after the checkpoint
    return r.revertToState(checkpoint)
}
```
Synchronization Overhead and Trade-offs
Parallel processing requires synchronization between tasks, which creates additional overhead. The synchronization point between Stage 1 and Stage 2 is particularly important:
```python
import threading

THRESHOLD = 10  # ms; synchronization-overhead budget, tuned per deployment

class PipelineSynchronizer:
    def __init__(self, n_workers):
        self.stage_barriers = {
            'stage1_to_stage2': threading.Barrier(n_workers),
            'stage2_to_stage3': threading.Barrier(n_workers),
        }

    def synchronize_stages(self, from_stage, to_stage):
        barrier_key = f'{from_stage}_to_{to_stage}'

        # Wait until all workers arrive
        self.stage_barriers[barrier_key].wait()

        # Measure synchronization overhead
        sync_overhead = self.measure_sync_overhead()

        # Adaptive optimization
        if sync_overhead > THRESHOLD:
            self.adjust_parallelism_degree()
```
Theoretical Performance Improvement Analysis
Mathematical Model of 50-70% Processing Time Reduction
Let’s mathematically model the performance improvement brought by OSS’s pipeline parallel processing.
If we denote traditional sequential processing time as $T_{seq}$:
$$T_{seq} = T_{collect} + T_{validate} + T_{broadcast}$$
In pipeline parallel processing, once the pipeline is full, a new request completes every slowest-stage interval, so the effective per-request time is:
$$T_{pipeline} = \max(T_{collect}, T_{validate}, T_{broadcast}) + 2 \times T_{overhead}$$
Where $T_{overhead}$ is the synchronization overhead between stages.
Based on actual measurements:
- $T_{collect} = 100ms$ (300ms → 100ms through parallelization)
- $T_{validate} = 150ms$ (must remain sequential)
- $T_{broadcast} = 80ms$ (240ms → 80ms through parallelization)
- $T_{overhead} = 10ms$
$$T_{seq} = 300 + 150 + 240 = 690ms$$
$$T_{pipeline} = 150 + 2 \times 10 = 170ms$$
Improvement rate = $(690 - 170) / 690 = 75.4\%$
This is a calculation under ideal conditions, and in practice, we can expect improvements in the range of 50-70%.
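The figures above describe a single request in steady state. For a batch of $k$ requests, standard pipeline arithmetic (ignoring synchronization overhead) gives the total completion time:

$$T_{total}(k) = T_{collect} + T_{validate} + T_{broadcast} + (k-1) \times \max(T_{collect}, T_{validate}, T_{broadcast})$$

With the stage times above, three requests complete in $330 + 2 \times 150 = 630ms$ instead of $3 \times 690 = 2070ms$ sequentially, which is the scenario the figure below depicts.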
[Figure: timing comparison. Sequential processing handles each request in 300ms collection + 150ms validation + 240ms broadcast; pipeline processing overlaps three requests across the stages, with each new request completing one slowest-stage interval after the previous.]
Relationship Between Degree of Parallelism and Efficiency
Simply increasing parallel processing units doesn’t linearly improve performance. According to Amdahl’s Law, the non-parallelizable portion (Stage 2) determines the limit of overall performance.
For OSS:
- Parallelizable ratio (P): Approximately 65% (Stage 1 + Stage 3)
- Sequential processing ratio (1-P): Approximately 35% (Stage 2)
Maximum theoretical speedup:
$$S_{max} = \frac{1}{1-P} = \frac{1}{0.35} \approx 2.86\times$$
This means that no matter how many parallel processing units are deployed, speedup beyond 2.86x is impossible. But this is not the end.
[Figure: speedup vs. number of parallel processing units under Amdahl’s Law, contrasting the parallelizable portion (Stage 1 + Stage 3) with the sequential portion (Stage 2 only); speedup plateaus near 2.86x as N→∞.]
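A few lines of Python make the asymptote concrete (a quick check of the formula, not project code):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's Law: speedup given parallel fraction p and n parallel units."""
    return 1.0 / ((1.0 - p) + p / n)

P = 0.65  # parallelizable fraction (Stage 1 + Stage 3)
for n in (1, 2, 4, 8, 16, 1_000_000):
    print(f"N={n:>9,}: speedup = {amdahl_speedup(P, n):.2f}x")
# Speedup climbs quickly at first, then plateaus near 1/0.35 ≈ 2.86x
# no matter how large N grows.
```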
Expected Constraints in Actual Implementation
There’s always a gap between theory and practice. Constraints to consider in OSS implementation:
1. Network Latency Variability
When sending cross-chain messages, latency varies with each chain’s network state, which can reduce the parallel efficiency of Stage 3 (one mitigation is sketched after this list).
2. zkVerify Service Availability
Proof verification time can vary depending on the zkVerify network’s load state, which can become a bottleneck in Stage 1.
3. Increasing Regulatory Verification Complexity
As new regulatory requirements are added, the sequential processing time of Stage 2 increases, affecting overall pipeline performance.
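For the first of these constraints, one common mitigation is to bound each chain’s broadcast with its own deadline so that a single congested network cannot stall the rest of the Stage 3 fan-out. In the sketch below, `broadcast_to_chain` is a stub and the timeout values are illustrative assumptions, not measured figures:

```python
import asyncio

CHAIN_TIMEOUTS = {"base": 2.0, "arbitrum": 2.0, "optimism": 3.0}  # seconds, illustrative

async def broadcast_to_chain(chain: str, message: dict) -> None:
    # Stub standing in for the real bridge/RPC call.
    await asyncio.sleep(0.1)

async def broadcast_with_deadline(chain: str, message: dict):
    try:
        # Bound the slow chain instead of letting it gate the others.
        await asyncio.wait_for(broadcast_to_chain(chain, message),
                               timeout=CHAIN_TIMEOUTS[chain])
        return chain, "ok"
    except asyncio.TimeoutError:
        # Hand off to a retry queue rather than blocking the fan-out.
        return chain, "retry-queued"

async def fan_out(message: dict) -> dict:
    results = await asyncio.gather(
        *(broadcast_with_deadline(c, message) for c in CHAIN_TIMEOUTS)
    )
    return dict(results)
```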
Conclusion: Possibilities Opened by Parallel Processing
Overcoming Single OSS Performance Limits
The 3-stage pipeline architecture cuts the processing time of a single OSS instance by 50-70%, roughly doubling to tripling its throughput. This enables processing hundreds of state synchronization requests per second, making real-time financial transactions and high-frequency RWA tokenization possible.
But we don’t stop here. We’re envisioning a bigger picture.
Synergy Potential with Decentralized Sequencer (D-quencer)
What if D-quencer transitions to an epoch-based system while ensuring zero-downtime service? How can we ensure state synchronization continuity when the Active Asserter elected through BLS signature-based VRF is replaced?
Theoretically, service transition with zero-second gaps is possible. The moment a new Active Asserter is elected, tasks already in the pipeline continue processing, and only new requests are routed to the new sequencer.
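Purely as a speculative illustration of that handover (every name below is hypothetical, since D-quencer is future work), the routing logic could look like this: in-flight pipeline work stays with the outgoing asserter, and only new requests go to the incoming one.

```python
class HandoverRouter:
    """Speculative sketch: zero-gap rotation of the Active Asserter."""

    def __init__(self, active_asserter):
        self.active = active_asserter
        self.draining = None  # outgoing asserter finishing in-flight work

    def rotate(self, new_asserter):
        # Swap atomically: in-flight pipeline work continues on the old
        # asserter while new requests route to the new one immediately.
        self.draining, self.active = self.active, new_asserter

    def route(self, request):
        return self.active.submit(request)

    def reap_drained(self):
        # Retire the outgoing asserter once its pipeline is empty.
        if self.draining is not None and self.draining.pipeline_empty():
            self.draining = None
```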
Potential of Multi-OSS Architecture
Going further, imagine an innovative structure of single consensus (D-quencer) + multiple execution (multiple OSS):
```python
class MultiOSSArchitecture:
    def __init__(self):
        self.d_quencer = DQuencer()
        self.oss_pool = {
            'financial': FinancialOSS(),
            'gaming': GamingOSS(),
            'real_estate': RealEstateOSS(),
            'general': GeneralOSS(),
        }

    def route_to_specialized_oss(self, sync_request):
        # Route to a domain-specialized OSS
        asset_type = sync_request.asset.type
        if asset_type in ['BOND', 'STOCK', 'DERIVATIVE']:
            return self.oss_pool['financial'].process(sync_request)
        elif asset_type in ['GAME_ITEM', 'VIRTUAL_ASSET']:
            return self.oss_pool['gaming'].process(sync_request)
        elif asset_type == 'REAL_ESTATE':
            return self.oss_pool['real_estate'].process(sync_request)
        else:
            return self.oss_pool['general'].process(sync_request)
```
Operating specialized OSS for each domain:
- Financial RWA OSS: Optimized for complex regulatory verification
- Gaming RWA OSS: Specialized for high-frequency small transaction processing
- Real Estate RWA OSS: Optimized for large metadata and legal document processing
Combined with dynamic load balancing, this opens the door to effectively unbounded horizontal scalability.
Future Research Topics
The parallel processing architecture is not a completion but a beginning. Topics we need to explore:
1. Adaptive Parallelism Adjustment
We need to develop algorithms that dynamically adjust parallelism based on network state and load.
2. Deterministic State Prefetching
We need deterministic mechanisms that anticipate upcoming state changes from historical state transition patterns and dependency graph analysis.
3. Cross-chain Atomic Commit
We need to develop protocols that ensure atomicity when states are updated simultaneously across multiple chains: either all updates succeed or all fail. A classic two-phase-commit baseline is sketched below.
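As a baseline for that research direction, classic two-phase commit already captures the all-or-nothing contract; the `ChainAdapter` methods below are hypothetical stand-ins for per-chain bridge calls:

```python
import asyncio

class ChainAdapter:
    # Hypothetical per-chain adapter; a real one would call bridge contracts.
    async def prepare(self, update) -> bool:
        return True  # vote to accept the tentative update

    async def commit(self, update) -> None:
        pass

    async def abort(self, update) -> None:
        pass

async def atomic_cross_chain_commit(chains, state_update) -> bool:
    # Phase 1: every chain must tentatively accept the update.
    votes = await asyncio.gather(*(c.prepare(state_update) for c in chains))
    if all(votes):
        # Phase 2a: unanimous yes, so commit everywhere.
        await asyncio.gather(*(c.commit(state_update) for c in chains))
        return True
    # Phase 2b: any refusal aborts everywhere (all-or-nothing).
    await asyncio.gather(*(c.abort(state_update) for c in chains))
    return False
```

Classic 2PC blocks if the coordinator fails between phases, so the protocol this topic calls for would still need timeout and recovery rules on top of this contract.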
The parallel processing architecture of the oracle state machine is a key technology that goes beyond simple performance improvement to make the vision of complete state synchronization achievable. Through the harmony of sequential and parallel, and the balance of consistency and performance, we are creating a new oracle paradigm.
References
1. Amdahl, G. (1967). Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities. AFIPS Spring Joint Computer Conference. https://dl.acm.org/doi/10.1145/1465482.1465560
2. Sei Protocol. (2024). Research: 64.85% of Ethereum Transactions Can Be Parallelized. https://blog.sei.io/research-64-85-of-ethereum-transactions-can-be-parallelized/
3. Movement Labs. (2024). Parallelization: A Fresh Perspective on Blockchain Transactions. https://medium.com/movementlabsxyz/parallelization-a-fresh-perspective-on-blockchain-transactions-4d6c265ec57f