Dataset Preview
The full dataset viewer is not available; the error below explains why. Only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1869, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 578, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 399, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1885, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 597, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 399, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1392, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1041, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 999, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1740, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1896, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
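
The root cause is that several columns in this dataset (for example model_kwargs, processor_kwargs and torch_compile_config) contain an empty dict in every row, so the inferred Arrow type is a struct with no child fields, which the Parquet writer cannot represent. The snippet below is a minimal sketch of how the error arises and two possible workarounds, assuming pyarrow's usual handling of zero-field struct arrays; the _dummy field name and the output file names are illustrative only.

import json

import pyarrow as pa
import pyarrow.parquet as pq

# Arrow can represent a struct column with zero fields (every row is {}),
# but Parquet cannot, which is what the traceback above reports.
rows = [{}, {}]
empty_struct = pa.array(rows, type=pa.struct([]))
table = pa.table({"model_kwargs": empty_struct})
# pq.write_table(table, "report.parquet")  # raises ArrowNotImplementedError

# Workaround 1: follow the error's hint and add a dummy child field.
with_dummy = pa.array(
    [{"_dummy": None} for _ in rows],
    type=pa.struct([("_dummy", pa.string())]),
)
pq.write_table(pa.table({"model_kwargs": with_dummy}), "report_dummy.parquet")

# Workaround 2: store the dicts as JSON strings instead of structs.
as_json = pa.array([json.dumps(r) for r in rows], type=pa.string())
pq.write_table(pa.table({"model_kwargs": as_json}), "report_json.parquet")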

Need help making the dataset viewer work? Review the dataset viewer configuration documentation and open a discussion for direct support.

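For context, each row below pairs an optimum-benchmark configuration (the config column and its sub-fields) with the resulting report (the report column and its overall/warmup/train sections). A comparable run could be launched from Python roughly as follows; this is a sketch that assumes the ProcessConfig, TrainingConfig, PyTorchConfig and Benchmark.launch API documented in the optimum-benchmark 0.5.0 README, and it passes only a subset of the training arguments shown in the rows.

from optimum_benchmark import (
    Benchmark,
    BenchmarkConfig,
    ProcessConfig,
    PyTorchConfig,
    TrainingConfig,
)
from optimum_benchmark.logging_utils import setup_logging

setup_logging(level="INFO")

if __name__ == "__main__":
    # Mirrors the "backend", "scenario" and "launcher" dicts in the rows below.
    backend_config = PyTorchConfig(
        model="google-bert/bert-base-uncased",
        device="cuda",
        device_ids="5",
        no_weights=True,
    )
    scenario_config = TrainingConfig(
        max_steps=5,
        warmup_steps=2,
        dataset_shapes={"dataset_size": 500, "sequence_length": 16, "num_choices": 1},
        training_arguments={"per_device_train_batch_size": 2, "gradient_accumulation_steps": 1},
        latency=True,
        memory=True,
        energy=False,
    )
    launcher_config = ProcessConfig(
        device_isolation=True,
        device_isolation_action="warn",
        start_method="spawn",
    )
    benchmark_config = BenchmarkConfig(
        name="cuda_training_transformers_fill-mask_google-bert/bert-base-uncased",
        backend=backend_config,
        scenario=scenario_config,
        launcher=launcher_config,
    )
    benchmark_report = Benchmark.launch(benchmark_config)
    benchmark_report.log()
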
Columns and types:

  Column         Type
  config         dict
  report         dict
  name           string
  backend        dict
  scenario       dict
  launcher       dict
  environment    dict
  print_report   bool
  log_report     bool
  overall        dict
  warmup         dict
  train          dict
{ "name": "cuda_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.482432, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0", "optimum_benchmark_commit": null, "transformers_version": "4.45.1", "transformers_commit": null, "accelerate_version": "0.34.2", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1324.023808, "max_global_vram": 68702.69952, "max_process_vram": 306368.376832, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.6114296264648438, 0.04522224807739258, 0.04265617752075195, 0.044096969604492185, 0.04459328842163086 ], "count": 5, "total": 0.7879983100891114, "mean": 0.15759966201782227, "p50": 0.04459328842163086, "p90": 0.38494667510986336, "p95": 0.49818815078735346, "p99": 0.5887813313293457, "stdev": 0.22691656003063712, "stdev_": 143.9828976314531 }, "throughput": { "unit": "samples/s", "value": 63.45191272598759 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1324.023808, "max_global_vram": 68702.69952, "max_process_vram": 306368.376832, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.6114296264648438, 0.04522224807739258 ], "count": 2, "total": 0.6566518745422364, "mean": 0.3283259372711182, "p50": 0.3283259372711182, "p90": 0.5548088886260987, "p95": 0.5831192575454712, "p99": 0.6057675526809693, "stdev": 0.28310368919372564, "stdev_": 86.22641620907035 }, "throughput": { "unit": "samples/s", "value": 12.183015552307593 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1324.023808, "max_global_vram": 68702.69952, "max_process_vram": 306368.376832, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.04265617752075195, 0.044096969604492185, 0.04459328842163086 ], "count": 3, "total": 0.131346435546875, "mean": 0.04378214518229167, "p50": 0.044096969604492185, "p90": 0.044494024658203124, "p95": 0.04454365653991699, "p99": 0.04458336204528808, "stdev": 0.0008215576559951588, "stdev_": 1.876467342051229 }, "throughput": { "unit": "samples/s", "value": 137.04216581938493 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_fill-mask_google-bert/bert-base-uncased
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.482432, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0", "optimum_benchmark_commit": null, "transformers_version": "4.45.1", "transformers_commit": null, "accelerate_version": "0.34.2", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1324.023808, "max_global_vram": 68702.69952, "max_process_vram": 306368.376832, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.6114296264648438, 0.04522224807739258, 0.04265617752075195, 0.044096969604492185, 0.04459328842163086 ], "count": 5, "total": 0.7879983100891114, "mean": 0.15759966201782227, "p50": 0.04459328842163086, "p90": 0.38494667510986336, "p95": 0.49818815078735346, "p99": 0.5887813313293457, "stdev": 0.22691656003063712, "stdev_": 143.9828976314531 }, "throughput": { "unit": "samples/s", "value": 63.45191272598759 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1324.023808, "max_global_vram": 68702.69952, "max_process_vram": 306368.376832, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.6114296264648438, 0.04522224807739258 ], "count": 2, "total": 0.6566518745422364, "mean": 0.3283259372711182, "p50": 0.3283259372711182, "p90": 0.5548088886260987, "p95": 0.5831192575454712, "p99": 0.6057675526809693, "stdev": 0.28310368919372564, "stdev_": 86.22641620907035 }, "throughput": { "unit": "samples/s", "value": 12.183015552307593 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1324.023808, "max_global_vram": 68702.69952, "max_process_vram": 306368.376832, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.04265617752075195, 0.044096969604492185, 0.04459328842163086 ], "count": 3, "total": 0.131346435546875, "mean": 0.04378214518229167, "p50": 0.044096969604492185, "p90": 0.044494024658203124, "p95": 0.04454365653991699, "p99": 0.04458336204528808, "stdev": 0.0008215576559951588, "stdev_": 1.876467342051229 }, "throughput": { "unit": "samples/s", "value": 137.04216581938493 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.482432, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0", "optimum_benchmark_commit": null, "transformers_version": "4.45.1", "transformers_commit": null, "accelerate_version": "0.34.2", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1662.713856, "max_global_vram": 68702.69952, "max_process_vram": 327018.92608, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.63964453125, 0.03851651382446289, 0.03908770751953125, 0.03855859375, 0.03878098678588867 ], "count": 5, "total": 0.7945883331298829, "mean": 0.15891766662597656, "p50": 0.03878098678588867, "p90": 0.39942180175781256, "p95": 0.5195331665039061, "p99": 0.6156222583007812, "stdev": 0.24036351775308762, "stdev_": 151.25034419162432 }, "throughput": { "unit": "samples/s", "value": 62.92566592697132 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1662.713856, "max_global_vram": 68702.69952, "max_process_vram": 327018.92608, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.63964453125, 0.03851651382446289 ], "count": 2, "total": 0.6781610450744628, "mean": 0.3390805225372314, "p50": 0.3390805225372314, "p90": 0.5795317295074462, "p95": 0.6095881303787231, "p99": 0.6336332510757446, "stdev": 0.30056400871276856, "stdev_": 88.64089463580625 }, "throughput": { "unit": "samples/s", "value": 11.796607985823767 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1662.713856, "max_global_vram": 68702.69952, "max_process_vram": 327018.92608, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.03908770751953125, 0.03855859375, 0.03878098678588867 ], "count": 3, "total": 0.11642728805541994, "mean": 0.038809096018473314, "p50": 0.03878098678588867, "p90": 0.039026363372802735, "p95": 0.03905703544616699, "p99": 0.0390815731048584, "stdev": 0.00021692232403061523, "stdev_": 0.5589471188078154 }, "throughput": { "unit": "samples/s", "value": 154.60293115675694 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_image-classification_google/vit-base-patch16-224
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.482432, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0", "optimum_benchmark_commit": null, "transformers_version": "4.45.1", "transformers_commit": null, "accelerate_version": "0.34.2", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1662.713856, "max_global_vram": 68702.69952, "max_process_vram": 327018.92608, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.63964453125, 0.03851651382446289, 0.03908770751953125, 0.03855859375, 0.03878098678588867 ], "count": 5, "total": 0.7945883331298829, "mean": 0.15891766662597656, "p50": 0.03878098678588867, "p90": 0.39942180175781256, "p95": 0.5195331665039061, "p99": 0.6156222583007812, "stdev": 0.24036351775308762, "stdev_": 151.25034419162432 }, "throughput": { "unit": "samples/s", "value": 62.92566592697132 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1662.713856, "max_global_vram": 68702.69952, "max_process_vram": 327018.92608, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.63964453125, 0.03851651382446289 ], "count": 2, "total": 0.6781610450744628, "mean": 0.3390805225372314, "p50": 0.3390805225372314, "p90": 0.5795317295074462, "p95": 0.6095881303787231, "p99": 0.6336332510757446, "stdev": 0.30056400871276856, "stdev_": 88.64089463580625 }, "throughput": { "unit": "samples/s", "value": 11.796607985823767 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1662.713856, "max_global_vram": 68702.69952, "max_process_vram": 327018.92608, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.03908770751953125, 0.03855859375, 0.03878098678588867 ], "count": 3, "total": 0.11642728805541994, "mean": 0.038809096018473314, "p50": 0.03878098678588867, "p90": 0.039026363372802735, "p95": 0.03905703544616699, "p99": 0.0390815731048584, "stdev": 0.00021692232403061523, "stdev_": 0.5589471188078154 }, "throughput": { "unit": "samples/s", "value": 154.60293115675694 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.482432, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0", "optimum_benchmark_commit": null, "transformers_version": "4.45.1", "transformers_commit": null, "accelerate_version": "0.34.2", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1332.953088, "max_global_vram": 68702.69952, "max_process_vram": 269274.9312, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.5969871215820313, 0.04346673583984375, 0.042961936950683594, 0.04279489517211914, 0.04712240219116211 ], "count": 5, "total": 0.7733330917358399, "mean": 0.15466661834716797, "p50": 0.04346673583984375, "p90": 0.37704123382568366, "p95": 0.48701417770385735, "p99": 0.5749925328063965, "stdev": 0.22116591879050443, "stdev_": 142.99525078777552 }, "throughput": { "unit": "samples/s", "value": 64.65519261275752 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1332.953088, "max_global_vram": 68702.69952, "max_process_vram": 269274.9312, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.5969871215820313, 0.04346673583984375 ], "count": 2, "total": 0.640453857421875, "mean": 0.3202269287109375, "p50": 0.3202269287109375, "p90": 0.5416350830078125, "p95": 0.5693111022949219, "p99": 0.5914519177246094, "stdev": 0.27676019287109377, "stdev_": 86.42627089020353 }, "throughput": { "unit": "samples/s", "value": 12.491141878985202 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1332.953088, "max_global_vram": 68702.69952, "max_process_vram": 269274.9312, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.042961936950683594, 0.04279489517211914, 0.04712240219116211 ], "count": 3, "total": 0.13287923431396484, "mean": 0.044293078104654944, "p50": 0.042961936950683594, "p90": 0.04629030914306641, "p95": 0.04670635566711426, "p99": 0.04703919288635254, "stdev": 0.0020017961649168446, "stdev_": 4.519433398119305 }, "throughput": { "unit": "samples/s", "value": 135.46134648450715 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_multiple-choice_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.482432, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0", "optimum_benchmark_commit": null, "transformers_version": "4.45.1", "transformers_commit": null, "accelerate_version": "0.34.2", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1332.953088, "max_global_vram": 68702.69952, "max_process_vram": 269274.9312, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.5969871215820313, 0.04346673583984375, 0.042961936950683594, 0.04279489517211914, 0.04712240219116211 ], "count": 5, "total": 0.7733330917358399, "mean": 0.15466661834716797, "p50": 0.04346673583984375, "p90": 0.37704123382568366, "p95": 0.48701417770385735, "p99": 0.5749925328063965, "stdev": 0.22116591879050443, "stdev_": 142.99525078777552 }, "throughput": { "unit": "samples/s", "value": 64.65519261275752 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1332.953088, "max_global_vram": 68702.69952, "max_process_vram": 269274.9312, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.5969871215820313, 0.04346673583984375 ], "count": 2, "total": 0.640453857421875, "mean": 0.3202269287109375, "p50": 0.3202269287109375, "p90": 0.5416350830078125, "p95": 0.5693111022949219, "p99": 0.5914519177246094, "stdev": 0.27676019287109377, "stdev_": 86.42627089020353 }, "throughput": { "unit": "samples/s", "value": 12.491141878985202 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1332.953088, "max_global_vram": 68702.69952, "max_process_vram": 269274.9312, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.042961936950683594, 0.04279489517211914, 0.04712240219116211 ], "count": 3, "total": 0.13287923431396484, "mean": 0.044293078104654944, "p50": 0.042961936950683594, "p90": 0.04629030914306641, "p95": 0.04670635566711426, "p99": 0.04703919288635254, "stdev": 0.0020017961649168446, "stdev_": 4.519433398119305 }, "throughput": { "unit": "samples/s", "value": 135.46134648450715 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.482432, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0", "optimum_benchmark_commit": null, "transformers_version": "4.45.1", "transformers_commit": null, "accelerate_version": "0.34.2", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1330.511872, "max_global_vram": 68702.69952, "max_process_vram": 309017.64096, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.66364501953125, 0.044795848846435546, 0.043815372467041015, 0.043458736419677735, 0.04359377670288086 ], "count": 5, "total": 0.8393087539672852, "mean": 0.16786175079345705, "p50": 0.043815372467041015, "p90": 0.4161053512573243, "p95": 0.539875185394287, "p99": 0.6388910527038574, "stdev": 0.2478920769722678, "stdev_": 147.67633233927296 }, "throughput": { "unit": "samples/s", "value": 59.572832719374816 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1330.511872, "max_global_vram": 68702.69952, "max_process_vram": 309017.64096, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.66364501953125, 0.044795848846435546 ], "count": 2, "total": 0.7084408683776855, "mean": 0.35422043418884275, "p50": 0.35422043418884275, "p90": 0.6017601024627686, "p95": 0.6327025609970093, "p99": 0.6574565278244019, "stdev": 0.30942458534240724, "stdev_": 87.35368021638361 }, "throughput": { "unit": "samples/s", "value": 11.292403300108631 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1330.511872, "max_global_vram": 68702.69952, "max_process_vram": 309017.64096, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.043815372467041015, 0.043458736419677735, 0.04359377670288086 ], "count": 3, "total": 0.1308678855895996, "mean": 0.043622628529866536, "p50": 0.04359377670288086, "p90": 0.04377105331420898, "p95": 0.043793212890625, "p99": 0.04381094055175781, "stdev": 0.0001470184535130077, "stdev_": 0.33702337174008323 }, "throughput": { "unit": "samples/s", "value": 137.54329352006053 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_text-classification_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.482432, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0", "optimum_benchmark_commit": null, "transformers_version": "4.45.1", "transformers_commit": null, "accelerate_version": "0.34.2", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1330.511872, "max_global_vram": 68702.69952, "max_process_vram": 309017.64096, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.66364501953125, 0.044795848846435546, 0.043815372467041015, 0.043458736419677735, 0.04359377670288086 ], "count": 5, "total": 0.8393087539672852, "mean": 0.16786175079345705, "p50": 0.043815372467041015, "p90": 0.4161053512573243, "p95": 0.539875185394287, "p99": 0.6388910527038574, "stdev": 0.2478920769722678, "stdev_": 147.67633233927296 }, "throughput": { "unit": "samples/s", "value": 59.572832719374816 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1330.511872, "max_global_vram": 68702.69952, "max_process_vram": 309017.64096, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.66364501953125, 0.044795848846435546 ], "count": 2, "total": 0.7084408683776855, "mean": 0.35422043418884275, "p50": 0.35422043418884275, "p90": 0.6017601024627686, "p95": 0.6327025609970093, "p99": 0.6574565278244019, "stdev": 0.30942458534240724, "stdev_": 87.35368021638361 }, "throughput": { "unit": "samples/s", "value": 11.292403300108631 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1330.511872, "max_global_vram": 68702.69952, "max_process_vram": 309017.64096, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.043815372467041015, 0.043458736419677735, 0.04359377670288086 ], "count": 3, "total": 0.1308678855895996, "mean": 0.043622628529866536, "p50": 0.04359377670288086, "p90": 0.04377105331420898, "p95": 0.043793212890625, "p99": 0.04381094055175781, "stdev": 0.0001470184535130077, "stdev_": 0.33702337174008323 }, "throughput": { "unit": "samples/s", "value": 137.54329352006053 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.482432, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0", "optimum_benchmark_commit": null, "transformers_version": "4.45.1", "transformers_commit": null, "accelerate_version": "0.34.2", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1345.601536, "max_global_vram": 68702.69952, "max_process_vram": 343468.568576, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.6369906616210937, 0.04272177505493164, 0.04119938278198242, 0.041418899536132814, 0.04106226348876953 ], "count": 5, "total": 0.8033929824829101, "mean": 0.16067859649658203, "p50": 0.041418899536132814, "p90": 0.399283106994629, "p95": 0.5181368843078612, "p99": 0.6132199061584472, "stdev": 0.23815676352309945, "stdev_": 148.21934514978508 }, "throughput": { "unit": "samples/s", "value": 62.23604274644458 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1345.601536, "max_global_vram": 68702.69952, "max_process_vram": 343468.568576, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.6369906616210937, 0.04272177505493164 ], "count": 2, "total": 0.6797124366760254, "mean": 0.3398562183380127, "p50": 0.3398562183380127, "p90": 0.5775637729644776, "p95": 0.6072772172927856, "p99": 0.6310479727554321, "stdev": 0.2971344432830811, "stdev_": 87.42945612004615 }, "throughput": { "unit": "samples/s", "value": 11.769683131181367 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1345.601536, "max_global_vram": 68702.69952, "max_process_vram": 343468.568576, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.04119938278198242, 0.041418899536132814, 0.04106226348876953 ], "count": 3, "total": 0.12368054580688478, "mean": 0.041226848602294926, "p50": 0.04119938278198242, "p90": 0.041374996185302736, "p95": 0.04139694786071777, "p99": 0.041414509201049804, "stdev": 0.00014688566082456773, "stdev_": 0.3562864148107387 }, "throughput": { "unit": "samples/s", "value": 145.53622708057304 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_text-generation_openai-community/gpt2
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.482432, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0", "optimum_benchmark_commit": null, "transformers_version": "4.45.1", "transformers_commit": null, "accelerate_version": "0.34.2", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.0", "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1345.601536, "max_global_vram": 68702.69952, "max_process_vram": 343468.568576, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.6369906616210937, 0.04272177505493164, 0.04119938278198242, 0.041418899536132814, 0.04106226348876953 ], "count": 5, "total": 0.8033929824829101, "mean": 0.16067859649658203, "p50": 0.041418899536132814, "p90": 0.399283106994629, "p95": 0.5181368843078612, "p99": 0.6132199061584472, "stdev": 0.23815676352309945, "stdev_": 148.21934514978508 }, "throughput": { "unit": "samples/s", "value": 62.23604274644458 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1345.601536, "max_global_vram": 68702.69952, "max_process_vram": 343468.568576, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.6369906616210937, 0.04272177505493164 ], "count": 2, "total": 0.6797124366760254, "mean": 0.3398562183380127, "p50": 0.3398562183380127, "p90": 0.5775637729644776, "p95": 0.6072772172927856, "p99": 0.6310479727554321, "stdev": 0.2971344432830811, "stdev_": 87.42945612004615 }, "throughput": { "unit": "samples/s", "value": 11.769683131181367 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1345.601536, "max_global_vram": 68702.69952, "max_process_vram": 343468.568576, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.04119938278198242, 0.041418899536132814, 0.04106226348876953 ], "count": 3, "total": 0.12368054580688478, "mean": 0.041226848602294926, "p50": 0.04119938278198242, "p90": 0.041374996185302736, "p95": 0.04139694786071777, "p99": 0.041414509201049804, "stdev": 0.00014688566082456773, "stdev_": 0.3562864148107387 }, "throughput": { "unit": "samples/s", "value": 145.53622708057304 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model_type": "deberta-v2", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cuda", "device_ids": "5", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.482432, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0", "optimum_benchmark_commit": null, "transformers_version": "4.45.1", "transformers_commit": null, "accelerate_version": "0.34.2", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.0", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1342.992384, "max_global_vram": 68702.69952, "max_process_vram": 442717.55264, "max_reserved": 3919.577088, "max_allocated": 3695.353344 }, "latency": { "unit": "s", "values": [ 0.648900634765625, 0.07283110046386719, 0.0725583038330078, 0.07257334136962891, 0.0722315902709961 ], "count": 5, "total": 0.939094970703125, "mean": 0.187818994140625, "p50": 0.07257334136962891, "p90": 0.4184728210449219, "p95": 0.5336867279052733, "p99": 0.6258578533935546, "stdev": 0.23054089882698955, "stdev_": 122.74631747541866 }, "throughput": { "unit": "samples/s", "value": 53.24275132957393 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1342.992384, "max_global_vram": 68702.69952, "max_process_vram": 442717.55264, "max_reserved": 3919.577088, "max_allocated": 3695.353344 }, "latency": { "unit": "s", "values": [ 0.648900634765625, 0.07283110046386719 ], "count": 2, "total": 0.7217317352294922, "mean": 0.3608658676147461, "p50": 0.3608658676147461, "p90": 0.5912936813354492, "p95": 0.6200971580505371, "p99": 0.6431399394226074, "stdev": 0.2880347671508789, "stdev_": 79.81768102778277 }, "throughput": { "unit": "samples/s", "value": 11.084450924769444 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1342.992384, "max_global_vram": 68702.69952, "max_process_vram": 442717.55264, "max_reserved": 3919.577088, "max_allocated": 3695.353344 }, "latency": { "unit": "s", "values": [ 0.0725583038330078, 0.07257334136962891, 0.0722315902709961 ], "count": 3, "total": 0.21736323547363282, "mean": 0.07245441182454428, "p50": 0.0725583038330078, "p90": 0.07257033386230469, "p95": 0.0725718376159668, "p99": 0.0725730406188965, "stdev": 0.00015767818581132, "stdev_": 0.21762399533813587 }, "throughput": { "unit": "samples/s", "value": 82.8106922533525 }, "energy": null, "efficiency": null } }
