Benchmark Results

Validated Computational Research

Benchmark Results & Computational Validation.

Independent ARM64 benchmark observation framework focused on reproducible runtime validation, cross-system verification and structured computational archive discipline.
Public Release
10B
Open validated benchmark datasets publicly available for independent verification.
Premium Archive
100B+
Extended structured validation archives prepared for long-term computational research.
VALIDATION RESULTS

Public benchmark comparison overview.

The public release covers validated benchmark outputs up to 10B. Extended 100B+ validation archives are reserved for premium datasets.
LIMIT    MAYAN_ALFA                    MR ONLY            PRIMESIEVE               π(N)            STATUS
10M      1.684354 s                    3.604905 s         1.000000 s               664579          OK
100M     0.360000 s                    23.874431 s        1.000000 s               5761455         OK
1B       1.793206 s                    244.145627 s       9.000000 s               50847534        OK
10B      18.806417 s                   NOT RUN            94.000000 s              455052511       OK
100B+    Extended Validation Archive   Premium MR Layer   Reference Verification   100B+ Dataset   PREMIUM QA
Measured values represent internal benchmark execution time under specific ARM64 testing conditions and should not be interpreted as universal performance metrics. Results depend on hardware configuration, compiler optimization, execution methodology and dataset structure. All benchmark outputs were cross-validated against independent computational methods and archived within the MAYAN_ALFA validation framework.
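
The π(N) counts in the public core can be re-checked without any MAYAN_ALFA tooling. A minimal sketch, assuming a Python environment with sympy installed; sympy's primepi is an independent reference and is not one of the comparison engines above:

    # Re-check the published public-core prime counts independently.
    from sympy import primepi

    # Expected pi(N) values, copied from the table above.
    EXPECTED = {
        10**7: 664579,
        10**8: 5761455,
        10**9: 50847534,
        10**10: 455052511,  # recomputing this one may take a while
    }

    for n, expected in EXPECTED.items():
        assert int(primepi(n)) == expected, f"mismatch at N={n}"
    print("all public pi(N) values confirmed")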
DATASET FLOW

Structured validation pipeline for public and premium computational archives.

Benchmark outputs move through a layered validation workflow including runtime observation, independent verification, QA comparison and long-term archival preparation; an illustrative routing sketch follows the four steps below.
STEP 01
Computation
Execution outputs are generated under controlled ARM64 benchmark conditions with deterministic validation workflows and structured logging layers.
STEP 02
Cross Validation
Outputs are compared against independent computational systems including MR and primesieve validation frameworks.
STEP 03
Public Release
Validated datasets up to 10B are published as transparent public research layers with QA summaries and archive manifests.
STEP 04
Premium Archive
Extended 100B+ validation archives are maintained separately as premium computational research datasets and long-term archival packages.
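
An illustrative Python sketch of this routing logic; the record and function names are hypothetical and are not part of the actual MAYAN_ALFA tooling:

    from dataclasses import dataclass

    @dataclass
    class BenchmarkRecord:
        limit: int            # upper bound N of the benchmark run
        mayan_seconds: float  # measured wall-clock time (STEP 01 output)
        pi_n: int             # prime count produced by the run
        validated: bool = False

    def cross_validate(record: BenchmarkRecord, reference_pi_n: int) -> BenchmarkRecord:
        # STEP 02: compare against an independent reference count (e.g. primesieve).
        record.validated = record.pi_n == reference_pi_n
        return record

    def route(record: BenchmarkRecord) -> str:
        # STEP 03 / STEP 04: public release up to 10B, premium archive beyond.
        if not record.validated:
            return "rejected"
        return "public" if record.limit <= 10**10 else "premium"

    # Example using the published 1B row: 1.793206 s, pi(10^9) = 50847534.
    rec = cross_validate(BenchmarkRecord(10**9, 1.793206, 50847534), 50847534)
    print(route(rec))  # -> public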
RESEARCH PRINCIPLES

Observation-first methodology for structured computational research.

MAYAN_ALFA is designed as a deterministic computational observation framework focused on validation, reproducibility, benchmark transparency and long-term archive integrity.
01
Measured Observation
All outputs are treated as measured computational observations generated under controlled execution conditions.
02
Cross-System Validation
Benchmark datasets are validated against independent computational systems and archived with QA summaries.
03
Long-Term Archive Discipline
Structured datasets, benchmark manifests and validation reports are preserved as reproducible research layers.
DATASET STRUCTURE

Public benchmark layers and premium validation archives.

MAYAN_ALFA datasets are divided into transparent public benchmark releases and extended premium archival validation layers.
PUBLIC RELEASE
Open Benchmark Layer
Public datasets provide transparent benchmark visibility and independent reproducibility for core runtime observations.
Validated datasets up to 10B
Cross-system QA summaries
Public benchmark manifests
GitHub + Zenodo release layers
View Public Datasets
PREMIUM ARCHIVE
100B+ Validation Layer
Premium computational archives extend benchmark validation into large-scale structured dataset environments.
Extended 100B+ validation archives
Long-term reproducible archive layers
Premium QA and comparison reports
Research-oriented dataset packaging
Request Premium Access
VISUAL SCALING

Runtime comparison across validated benchmark layers.

Relative runtime scaling comparison between MAYAN_ALFA, MR-only execution and primesieve reference validation across the public validated core.
[Chart: relative runtime per limit, log scale (1 s – 3000 s), for MAYAN_ALFA / MR ONLY / PRIMESIEVE]
10M:  1.684 s / 3.605 s / 1.000 s
100M: 0.360 s / 23.874 s / 1.000 s
1B:   1.793 s / 244.146 s / 9.000 s
10B:  18.806 s / NOT RUN / 94.000 s
Public validated core: 10M–10B. MAYAN 100M uses the corrected public wall-clock value 0.360000 s. MR 10B was not run and is therefore displayed as disabled, not estimated. 100B+ belongs to the premium archive layer and is not disclosed in the public visualization.
RUNTIME CURVES

Validated log-scale runtime curves across public benchmark layers.

Visual representation of public core runtime progression for MAYAN_ALFA, MR-only execution and primesieve validation layers. Values are based on the THROUGHPUT + SCALING REPORT for P0_7_MAYAN_ALFA_MR_PRIMESIEVE.
[Chart: log-scale runtime curves (0.1 s – 1000 s) across 10M, 100M, 1B, 10B]
MAYAN_ALFA: 1.684 s, 0.360 s, 1.793 s, 18.806 s
MR ONLY: 3.605 s, 23.874 s, 244.146 s, NOT RUN at 10B
PRIMESIEVE: 1.000 s, 1.000 s, 9.000 s, 94.000 s
Public validated core: 10M–10B. MR 10B is marked as NOT RUN and is therefore not connected in the curve. 100B is treated as an extended premium archive layer, not as part of the public core curve.
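
A minimal plotting sketch that reproduces these public curves, assuming matplotlib is available; the MR point at 10B is simply omitted rather than estimated, exactly as in the note above:

    # Reproduce the public log-scale runtime curves from the published values.
    import matplotlib.pyplot as plt

    limits = [1e7, 1e8, 1e9, 1e10]
    mayan = [1.684354, 0.360000, 1.793206, 18.806417]
    mr_only = [3.604905, 23.874431, 244.145627]   # no 10B measurement
    primesieve = [1.0, 1.0, 9.0, 94.0]

    plt.figure()
    plt.loglog(limits, mayan, marker="o", label="MAYAN_ALFA")
    plt.loglog(limits[:3], mr_only, marker="s", label="MR ONLY")  # curve stops at 1B
    plt.loglog(limits, primesieve, marker="^", label="PRIMESIEVE")
    plt.xlabel("limit N")
    plt.ylabel("wall-clock seconds")
    plt.legend()
    plt.savefig("runtime_curves.png")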
RESEARCH ACCESS

Access validated benchmark releases and extended premium archives.

Public MAYAN_ALFA releases provide transparent benchmark validation up to 10B. Extended 100B+ archives, premium QA layers and long-term computational datasets are prepared as premium research packages.
VISUAL ANALYTICS

Benchmark visualization and runtime scaling analytics.

Structured graphical analysis of MAYAN_ALFA benchmark behavior across validated computational layers and independent comparison frameworks.
01 Runtime Log Scale (10M–10B)
RUNTIME SCALING
Log-scale runtime comparison.
Comparative execution scaling between MAYAN_ALFA, MR-only execution and primesieve across validated public benchmark layers.
10M → 1.684 s 100M → 0.360 s 1B → 1.793 s 10B → 18.806 s
03 Mayan Throughput, Numbers Per Second (without 100B, scaled)
THROUGHPUT
Numbers processed per second.
Observed throughput scaling behavior across MAYAN_ALFA validation layers and structured dataset ranges.
100M → 278M/sec 1B → 557M/sec 10B → 531M/sec
05 Mayan Speedup vs MR (10M–10B)
SPEEDUP
MAYAN_ALFA vs MR scaling ratio.
Relative acceleration behavior of MAYAN_ALFA compared to MR-only execution across benchmark layers.
10M → 2.1× 100M → 66× 1B → 136×
06 Mayan Ratio vs Primesieve (10M–10B)
REFERENCE COMPARISON
Ratio comparison vs primesieve.
Relative runtime comparison between MAYAN_ALFA and primesieve validation layers under public benchmark conditions.
10M → 0.59× 100M → 2.78× 1B → 5.02× 10B → 5.00×
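
All figures on these cards follow directly from the published wall-clock values: throughput is N divided by the MAYAN_ALFA time, speedup is the MR time divided by the MAYAN_ALFA time, and the primesieve ratio is the primesieve time divided by the MAYAN_ALFA time (values above 1× mean MAYAN_ALFA finished faster). A minimal recomputation sketch in Python:

    # Derive the card figures from the published table values.
    rows = {
        10**7:  (1.684354, 3.604905, 1.0),
        10**8:  (0.360000, 23.874431, 1.0),
        10**9:  (1.793206, 244.145627, 9.0),
        10**10: (18.806417, None, 94.0),   # MR 10B: NOT RUN
    }

    for n, (t_mayan, t_mr, t_ps) in rows.items():
        throughput = n / t_mayan / 1e6                       # millions per second
        speedup = f"{t_mr / t_mayan:.1f}x" if t_mr else "n/a"
        ratio = t_ps / t_mayan
        print(f"N=10^{len(str(n)) - 1}: {throughput:.0f}M/s, "
              f"speedup {speedup}, vs primesieve {ratio:.2f}x")

Running this reproduces the published 278M/sec, 557M/sec and 531M/sec throughput figures, the 2.1×/66×/136× speedups, and the 0.59×/2.78×/5.02×/5.00× primesieve ratios.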
QA INTEGRITY

Validation summary for the public benchmark core.

The public benchmark layer is restricted to the validated 10M–10B core. Extended 100B+ archive information remains separated as a premium research layer.
4
Public Limits
10M, 100M, 1B and 10B are presented as the public validated benchmark core.
0
Mismatches
Public benchmark outputs were validated against independent reference layers with no mismatches.
3
Engines
MAYAN_ALFA, MR-only and primesieve are used as comparison and validation frameworks.
100B+
Premium Only
Extended archive layers are not disclosed in public visualizations and remain premium-only.
FRAMEWORK STRUCTURE

Independent validation architecture across multiple computational layers.

MAYAN_ALFA benchmark observations are organized into separate execution, comparison and archival layers in order to maintain reproducibility, validation integrity and structured long-term dataset discipline.
01
Execution Layer
MAYAN_ALFA runtime core.
Primary runtime observation framework focused on structured ARM64 execution behavior, throughput scaling and deterministic benchmark progression across validated public layers.
02
Reference Layer
Cross-system verification.
Independent comparison using MR-only execution and primesieve reference validation to confirm consistency, runtime integrity and mismatch-free public benchmark outputs.
03
Archive Layer
Premium validation archives.
Extended 100B+ validation archives are separated from the public benchmark layer and preserved as long-term premium research datasets with controlled disclosure policy.
VALIDATION METHODOLOGY

Structured benchmark methodology and verification workflow.

Public benchmark outputs are generated through a layered execution and validation workflow designed to maintain deterministic reproducibility, independent comparison integrity and long-term archival consistency.
01
Execution
Primary runtime observation.
MAYAN_ALFA executes structured runtime observation across ARM64 benchmark ranges while preserving deterministic execution methodology and reproducible dataset progression.
02
Cross Validation
Independent comparison layer.
Benchmark outputs are validated against MR-only execution and primesieve reference layers to verify runtime consistency and mismatch-free public benchmark integrity.
03
Archival Policy
Public vs premium separation.
The validated 10M–10B core forms the public benchmark layer, while extended 100B+ archives remain separated as controlled premium validation datasets.
04
Integrity
Long-term reproducibility discipline.
Benchmark reports, CSV outputs, runtime summaries and validation archives are regenerated after each clean benchmark cycle to preserve methodological consistency; a minimal regeneration sketch follows.
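
As one concrete illustration of this regeneration step, a Python sketch that rewrites the public QA summary as CSV after a clean cycle; the file name and field names are hypothetical, not the framework's actual manifest format:

    # Regenerate a QA summary CSV from the validated public-core rows.
    import csv

    rows = [
        {"limit": "10M",  "mayan_s": 1.684354,  "primesieve_s": 1.0,  "pi_n": 664579,    "status": "OK"},
        {"limit": "100M", "mayan_s": 0.360000,  "primesieve_s": 1.0,  "pi_n": 5761455,   "status": "OK"},
        {"limit": "1B",   "mayan_s": 1.793206,  "primesieve_s": 9.0,  "pi_n": 50847534,  "status": "OK"},
        {"limit": "10B",  "mayan_s": 18.806417, "primesieve_s": 94.0, "pi_n": 455052511, "status": "OK"},
    ]

    with open("public_core_qa_summary.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)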
PUBLICATION LAYERS

Public repositories and long-term research archives.

MAYAN_ALFA public benchmark layers are distributed through transparent repository and archival publication systems designed for reproducibility and long-term preservation.
GITHUB
Public benchmark repository.
Source releases, benchmark manifests, public runtime reports and validation summaries are published through the GitHub repository layer.
Public validated benchmark core
CSV and runtime summaries
Release manifests and QA reports
Transparent revision history
Open GitHub
ZENODO
Long-term archival publication.
Structured benchmark archives and validated release layers are preserved through Zenodo archival infrastructure and DOI-based publication systems.
DOI archival publication
Long-term preservation layers
Research-grade release snapshots
Academic citation support
Open Zenodo
NEXT STEP

Access the validated public benchmark core.

Explore the MAYAN_ALFA public validation layer, benchmark summaries and archive-ready research outputs. Extended 100B+ layers remain separated as controlled premium research archives.