---
license: apache-2.0
tags:
  - generated_from_trainer
metrics:
  - precision
  - recall
  - f1
  - accuracy
widget:
  - text: >-
      The process starts when the customer enters the shop. The customer then
      takes the product from the shelf. The customer then pays for the product
      and leaves the store.
    example_title: Example 1
  - text: >-
      The process begins when the HR department hires the new employee. Next,
      the new employee completes necessary paperwork and provides documentation
      to the HR department. After the initial task, the HR department performs
      a decision to determine the employee's role and department assignment.
      The employee is trained by the Sales department. After the training, the
      Sales department assigns the employee a sales quota and performance goals.
      Finally, the process ends with an 'End' event, when the employee begins
      their role in the Sales department.
    example_title: Example 2
  - text: >-
      A customer places an order for a product on the company's website. Next,
      the customer service department checks the availability of the product and
      confirms the order with the customer. After the initial task, the
      warehouse processes the order. If the order is eligible for same-day
      shipping, the warehouse staff picks and packs the order, and it is sent to
      the shipping department. After the order is packed, the shipping
      department delivers the order to the customer. Finally, the process ends
      with an 'End' event, when the customer receives their order.
    example_title: Example 3
base_model: bert-base-cased
model-index:
  - name: bert-finetuned-v4
    results: []
---

bpmn-information-extraction

This model is a fine-tuned version of bert-base-cased, trained for token classification on a dataset of 90 textual process descriptions.

The dataset contains 5 target labels:

  • AGENT
  • TASK
  • TASK_INFO
  • PROCESS_INFO
  • CONDITION
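
The card does not include a usage snippet, so the following is a minimal sketch of how the model could be run with the transformers token-classification pipeline. The Hub id jtlicardo/bpmn-information-extraction is an assumption based on the repository name above; replace it with the actual checkpoint id if it differs.

```python
from transformers import pipeline

# Assumed Hub id for this checkpoint -- replace if the actual repo id differs.
extractor = pipeline(
    "token-classification",
    model="jtlicardo/bpmn-information-extraction",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

text = (
    "The process starts when the customer enters the shop. "
    "The customer then takes the product from the shelf. "
    "The customer then pays for the product and leaves the store."
)

for entity in extractor(text):
    print(f'{entity["entity_group"]:>12}  {entity["word"]}')
```

With aggregation_strategy="simple", sub-word tokens are grouped, so each printed row corresponds to one predicted AGENT, TASK, TASK_INFO, PROCESS_INFO, or CONDITION span.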

It achieves the following results on the evaluation set:

  • Loss: 0.2909
  • Precision: 0.8557
  • Recall: 0.9247
  • F1: 0.8889
  • Accuracy: 0.9285
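
The evaluation script is not shown in the card. The sketch below illustrates how span-level precision, recall, and F1 are commonly computed for token-classification models with seqeval; the BIO tags (e.g. B-AGENT / I-AGENT) are an assumption for illustration, since the card does not state the tagging scheme.

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy gold and predicted tag sequences (BIO scheme assumed for illustration).
y_true = [["B-AGENT", "I-AGENT", "O", "B-TASK", "I-TASK", "I-TASK", "O"]]
y_pred = [["B-AGENT", "I-AGENT", "O", "B-TASK", "I-TASK", "O", "O"]]

print("precision:", precision_score(y_true, y_pred))  # exact span matches only
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))    # plain token-level accuracy
```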

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
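
A minimal Trainer sketch that mirrors these hyperparameters is shown below. The listed Adam settings match the Trainer defaults, so they are not set explicitly. The num_labels value and the train_dataset / eval_dataset variables are assumptions (a tokenized, label-aligned dataset prepared separately), since the card does not include the training script.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

model_name = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# num_labels is an assumption: 5 entity types in a BIO scheme plus "O" gives 11 tags.
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=11)

args = TrainingArguments(
    output_dir="bert-finetuned-v4",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=15,
    seed=42,
    lr_scheduler_type="linear",      # linear decay, matching the card
    evaluation_strategy="epoch",     # produces per-epoch metrics like the table below
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,     # hypothetical: tokenized, label-aligned training split
    eval_dataset=eval_dataset,       # hypothetical: held-out evaluation split
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)

trainer.train()
```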

Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 2.0586        | 1.0   | 10   | 1.5601          | 0.1278    | 0.1559 | 0.1404 | 0.4750   |
| 1.3702        | 2.0   | 20   | 1.0113          | 0.3947    | 0.5645 | 0.4646 | 0.7150   |
| 0.8872        | 3.0   | 30   | 0.6645          | 0.5224    | 0.6882 | 0.5940 | 0.8051   |
| 0.5341        | 4.0   | 40   | 0.4741          | 0.6754    | 0.8280 | 0.7440 | 0.8541   |
| 0.3221        | 5.0   | 50   | 0.3831          | 0.7523    | 0.8817 | 0.8119 | 0.8883   |
| 0.2168        | 6.0   | 60   | 0.3297          | 0.7731    | 0.8978 | 0.8308 | 0.9079   |
| 0.1565        | 7.0   | 70   | 0.2998          | 0.8195    | 0.9032 | 0.8593 | 0.9128   |
| 0.1227        | 8.0   | 80   | 0.3227          | 0.8038    | 0.9032 | 0.8506 | 0.9099   |
| 0.0957        | 9.0   | 90   | 0.2840          | 0.8431    | 0.9247 | 0.8821 | 0.9216   |
| 0.077         | 10.0  | 100  | 0.2914          | 0.8252    | 0.9140 | 0.8673 | 0.9216   |
| 0.0691        | 11.0  | 110  | 0.2850          | 0.8431    | 0.9247 | 0.8821 | 0.9285   |
| 0.059         | 12.0  | 120  | 0.2886          | 0.8564    | 0.9301 | 0.8918 | 0.9285   |
| 0.0528        | 13.0  | 130  | 0.2838          | 0.8564    | 0.9301 | 0.8918 | 0.9305   |
| 0.0488        | 14.0  | 140  | 0.2881          | 0.8515    | 0.9247 | 0.8866 | 0.9305   |
| 0.049         | 15.0  | 150  | 0.2909          | 0.8557    | 0.9247 | 0.8889 | 0.9285   |

Framework versions

  • Transformers 4.25.1
  • Pytorch 1.13.0+cu116
  • Datasets 2.8.0
  • Tokenizers 0.13.2
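
For reproducibility, it helps to check that the installed libraries match these versions. The snippet below is a small sketch that simply prints the versions in the current environment for comparison with the list above.

```python
import datasets
import tokenizers
import torch
import transformers

# Compare against the versions listed above (4.25.1 / 1.13.0+cu116 / 2.8.0 / 0.13.2).
for name, module in [
    ("Transformers", transformers),
    ("Pytorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```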