Dataset Preview
Viewer
The full dataset viewer is not available. Only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    TypeError
Message:      Couldn't cast array of type string to null
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1766, in _prepare_split_single
                  writer.write(example, key)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 500, in write
                  self.write_examples_on_file()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 458, in write_examples_on_file
                  self.write_batch(batch_examples=batch_examples)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 568, in write_batch
                  arrays.append(pa.array(typed_sequence))
                File "pyarrow/array.pxi", line 247, in pyarrow.lib.array
                File "pyarrow/array.pxi", line 112, in pyarrow.lib._handle_arrow_array_protocol
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 208, in __arrow_array__
                  out = cast_array_to_feature(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2020, in cast_array_to_feature
                  arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2020, in <listcomp>
                  arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2075, in cast_array_to_feature
                  casted_array_values = _c(array.values, feature.feature)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2116, in cast_array_to_feature
                  return array_cast(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1962, in array_cast
                  raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
              TypeError: Couldn't cast array of type string to null
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1775, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 599, in finalize
                  self.write_examples_on_file()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 458, in write_examples_on_file
                  self.write_batch(batch_examples=batch_examples)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 568, in write_batch
                  arrays.append(pa.array(typed_sequence))
                File "pyarrow/array.pxi", line 247, in pyarrow.lib.array
                File "pyarrow/array.pxi", line 112, in pyarrow.lib._handle_arrow_array_protocol
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 208, in __arrow_array__
                  out = cast_array_to_feature(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2020, in cast_array_to_feature
                  arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2020, in <listcomp>
                  arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2075, in cast_array_to_feature
                  casted_array_values = _c(array.values, feature.feature)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2116, in cast_array_to_feature
                  return array_cast(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1962, in array_cast
                  raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
              TypeError: Couldn't cast array of type string to null
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1524, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1099, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1627, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1784, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
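For maintainers hitting this error locally: the `string → null` cast usually means Arrow inferred a `null` type for a field whose values were all `None` in the rows that type inference saw, so a later non-null string can no longer be cast. A minimal sketch of the usual workaround, declaring the schema up front; the local file name is an assumption and only a fragment of the real schema is shown (in practice every field in the file must be declared):

```python
from datasets import Features, Sequence, Value, load_dataset

# Hypothetical local export of one shard; field names mirror the JSON
# records shown in the preview below, but this is not the full schema.
features = Features({
    "title": Value("string"),
    "fullText": Value("string"),   # often null in early rows -> bad inference
    "authors": Sequence(Value("string")),
    "year": Value("int32"),
})

# With explicit features, a null-only prefix can no longer poison the type.
ds = load_dataset("json", data_files="shard-000.jsonl", features=features)
```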

Need help making the dataset viewer work? Review how to configure the dataset viewer, and open a discussion for direct support.

__key__ (string) | __url__ (string) | json (dict) | path (unknown)
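This column layout follows WebDataset conventions: each member of a tar shard becomes part of a row keyed by `__key__`, with one column per file extension. A standard-library sketch of how such rows arise, assuming the layout the preview suggests (a `<key>.json` member per record; the local shard name is an assumption):

```python
import json
import tarfile

# Iterate a shard the way the preview rows suggest it is laid out.
with tarfile.open("shard-000.tar.gz", "r:gz") as tar:
    for member in tar:
        if member.name.endswith(".json"):
            key = member.name[: -len(".json")]           # -> __key__ column
            record = json.load(tar.extractfile(member))  # -> json column
            print(key, record.get("title"))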
000000000
hf://datasets/laion/open-access-papers@1409f48923cf9afefb2c7268cc33561062112592/data/shard-000.tar.gz
{ "abstract": "NoIn this article we reflect on the relatively recent emphasis on Palestinian children's mental health and well-being in the context of exposure to chronic warlike conditions, as we position this trend within the larger framework of the generations-long history of political turmoil and suffering. We describe how a process that started with no attention to psychosocial health of children in relation to exposure to dispossession, expulsion, occupation, repression and military attacks, proceeded with a focus on presumed mental disorders, and the more recent approach of designing context appropriate and community-based psychosocial interventions", "authors": [ "Rabaia, Y.", "Saleh, Mahasin F.", "Giacaman, R." ], "contributors": [], "coreId": "153514001", "datePublished": "2014-01-01T00:00:00", "doi": "10.1111/chso.12061", "downloadUrl": null, "enrichments": { "citationCount": null, "documentType": { "confidence": null, "type": null }, "references": [] }, "fullText": null, "fullTextIdentifier": null, "identifiers": [], "issn": null, "journals": [], "language": null, "magId": null, "oai": "oai:bradscholars.brad.ac.uk:10454/9876", "pdfHashValue": null, "publisher": "'Wiley'", "relations": [ "http://dx.doi.org/10.1111/chso.12061" ], "subjects": [ "Article", "No full-text available in the repository" ], "title": "Sick or Sad? Supporting Palestinian Children Living in Conditions of Chronic Political Violence", "topics": [ "Social suffering; Mental health; Trauma; Occupied Palestinian territory; Children; Post-traumatic stress reactions; War; Experiences" ], "urls": [ "http://hdl.handle.net/10454/9876" ], "year": 2014 }
[ 49, 48, 46, 116, 97, 114, 46, 120, 122 ]
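The `path` cells render as raw byte values; decoded as UTF-8 they read as a file name. A quick check on the sample values above (this is just decoding, not documented dataset behavior):

```python
# 49='1', 48='0', 46='.', 116='t', 97='a', 114='r', 46='.', 120='x', 122='z'
print(bytes([49, 48, 46, 116, 97, 114, 46, 120, 122]).decode("utf-8"))
# -> 10.tar.xz
```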
000000001
hf://datasets/laion/open-access-papers@1409f48923cf9afefb2c7268cc33561062112592/data/shard-000.tar.gz
{ "abstract": "No90009335School leavers with intellectual disabilities (ID) often face difficulties in making a smooth transition from school to college, employment or more broadly to adult life. The transition phase is traumatic for the young person with ID and their families as it often results in the loss of friendships, relationships and social networks. The aim of this study was to explore the family carers' views and experiences on transition from school to college or to adult life with special reference to ethnicity. Forty-three families (consisting of 16 White British, 24 Pakistani, 2 Bangladeshi and one Black African) were interviewed twice using a semi-structured interview schedule. The carers were interviewed twice, Time 1 (T1) and Time 2 (T2), T2 being a year later to observe any changes during transition. The findings indicate that although transition planning occurred it was relatively later in the young person's school life. Parents were often confused about the process and had limited information about future options for their son or daughter. All family carers regardless of ethnicity, reported lack of information about services and expressed a sense of being excluded. South Asian families experienced more problems related to language, information about services, culture and religion. The majority of families lacked knowledge and awareness of formal services and the transition process. Socio-economic status, high levels of unemployment and caring for a child with a disability accounted for similar family experiences, regardless of ethnic background. The three key areas relevant for ethnicity are interdependence, religion and assumptions by service providers", "authors": [ "Raghavan, R.", "Pawson, Nicole", "Small, Neil A." ], "contributors": [], "coreId": "153513954", "datePublished": "2013-01-01T00:00:00", "doi": "10.1111/j.1365-2788.2012.01588.x", "downloadUrl": null, "enrichments": { "citationCount": null, "documentType": { "confidence": null, "type": null }, "references": [] }, "fullText": null, "fullTextIdentifier": null, "identifiers": [], "issn": null, "journals": [], "language": null, "magId": null, "oai": "oai:bradscholars.brad.ac.uk:10454/9794", "pdfHashValue": null, "publisher": "'Wiley'", "relations": [ "http://dx.doi.org/10.1111/j.1365-2788.2012.01588.x" ], "subjects": [ "Article", "No full-text available in the repository" ], "title": "Family carers' perspectives on post-school transition of young people with intellectual disabilities with special reference to ethnicity", "topics": [ "Acculturation", "; Adolescent", "; Adult", "; African continental ancestry group", "; Asian continental ancestry group", "; Caregivers", "; Employment", "; Ethnic groups", "; European continental ancestry group", "; Female", "; Great Britain", "; Humans", "; Intellectual disability", "; Life change events", "; Male", "; Qualitative research", "; Schools", "; Social Behavior", "; Social support", "; Young adult", "; Children", "; Culture", "; Ethnicity", "; Family", "; Transition" ], "urls": [ "http://hdl.handle.net/10454/9794" ], "year": 2013 }
[ 49, 48, 46, 116, 97, 114, 46, 120, 122 ]
000000002
hf://datasets/laion/open-access-papers@1409f48923cf9afefb2c7268cc33561062112592/data/shard-000.tar.gz
{ "abstract": "YesMultimodal biometric systems have been widely\r\napplied in many real-world applications due to its ability to\r\ndeal with a number of significant limitations of unimodal\r\nbiometric systems, including sensitivity to noise, population\r\ncoverage, intra-class variability, non-universality, and\r\nvulnerability to spoofing. In this paper, an efficient and\r\nreal-time multimodal biometric system is proposed based\r\non building deep learning representations for images of\r\nboth the right and left irises of a person, and fusing the\r\nresults obtained using a ranking-level fusion method. The\r\ntrained deep learning system proposed is called IrisConvNet\r\nwhose architecture is based on a combination of Convolutional\r\nNeural Network (CNN) and Softmax classifier to\r\nextract discriminative features from the input image without\r\nany domain knowledge where the input image represents\r\nthe localized iris region and then classify it into one of N\r\nclasses. In this work, a discriminative CNN training scheme\r\nbased on a combination of back-propagation algorithm and\r\nmini-batch AdaGrad optimization method is proposed for\r\nweights updating and learning rate adaptation, respectively.\r\nIn addition, other training strategies (e.g., dropout method,\r\ndata augmentation) are also proposed in order to evaluate\r\ndifferent CNN architectures. The performance of the proposed\r\nsystem is tested on three public datasets collected\r\nunder different conditions: SDUMLA-HMT, CASIA-Iris-\r\nV3 Interval and IITD iris databases. The results obtained\r\nfrom the proposed system outperform other state-of-the-art\r\nof approaches (e.g., Wavelet transform, Scattering transform,\r\nLocal Binary Pattern and PCA) by achieving a Rank-1 identification rate of 100% on all the employed databases\r\nand a recognition time less than one second per person", "authors": [ "Al-Waisy, Alaa S.", "Qahwaji, Rami S.R.", "Ipson, Stanley S.", "Al-Fahdawi, Shumoos", "Nagem, Tarek A.M." ], "contributors": [], "coreId": "156963484", "datePublished": "2017-10-24T00:00:00", "doi": "10.1007/s10044-017-0656-1", "downloadUrl": "https://core.ac.uk/download/156963484.pdf", "enrichments": { "citationCount": 3, "documentType": { "confidence": 1, "type": "research" }, "references": [] }, "fullText": "Vol.:(0123456789) \nPattern Anal Applic \nDOI 10.1007/s10044-017-0656-1\nSHORT PAPER\nA multi‑biometric iris recognition system based on a deep \nlearning approach\nAlaa S. Al‑Waisy1 · Rami Qahwaji1 · Stanley Ipson1 · Shumoos Al‑Fahdawi1 · \nTarek A. M. Nagem1 \nReceived: 8 February 2017 / Accepted: 26 September 2017 \n© The Author(s) 2017. This article is an open access publication\nidentification rate of 100% on all the employed databases \nand a recognition time less than one second per person.\nKeywords Iris recognition · Multimodal biometric \nsystems · Deep learning · Convolutional Neural Network · \nSoftmax classifier · AdaGrad method\n1 Introduction\nBiometric systems are constantly evolving and promise tech-\nnologies that can be used in automatic systems for identify-\ning and/or authenticating a person’s identity uniquely and \nefficiently without the need for the user to carry or remember \nanything, unlike traditional methods like passwords, IDs [1, \n2]. 
In this regard, iris recognition has been utilized in many \ncritical applications, such as access control in restricted \nareas, database access, national ID cards, and financial ser-\nvices and is considered one of the most reliable and accu-\nrate biometric systems [3, 4]. Several studies have demon-\nstrated that the iris trait has a number of advantages over \nother biometric traits (e.g., face, fingerprint), which make \nit commonly accepted for application in high reliability and \naccurate biometric systems. Firstly, the iris trait represents \nthe annular region of the eye lying between the black pupil \nand the white sclera; this makes it completely protected \nfrom varied environmental conditions [5]. Secondly, it is \nbelieved that the iris texture provides a very high degree \nof uniqueness and randomness, so it very unlikely for any \ntwo iris patterns to be the same, even irises from identical \ntwins, or from the right and left eyes of an individual person. \nThis complexity in iris patterns is due to the distinctiveness \nand richness of the texture details within the iris region, \nincluding rings, ridges, crypts, furrows, freckles, zigzag \npatterns [4]. Thirdly, the iris trait provides a high degree \nAbstract Multimodal biometric systems have been widely \napplied in many real-world applications due to its ability to \ndeal with a number of significant limitations of unimodal \nbiometric systems, including sensitivity to noise, popula-\ntion coverage, intra-class variability, non-universality, and \nvulnerability to spoofing. In this paper, an efficient and \nreal-time multimodal biometric system is proposed based \non building deep learning representations for images of \nboth the right and left irises of a person, and fusing the \nresults obtained using a ranking-level fusion method. The \ntrained deep learning system proposed is called IrisCon-\nvNet whose architecture is based on a combination of Con-\nvolutional Neural Network (CNN) and Softmax classifier to \nextract discriminative features from the input image without \nany domain knowledge where the input image represents \nthe localized iris region and then classify it into one of N \nclasses. In this work, a discriminative CNN training scheme \nbased on a combination of back-propagation algorithm and \nmini-batch AdaGrad optimization method is proposed for \nweights updating and learning rate adaptation, respectively. \nIn addition, other training strategies (e.g., dropout method, \ndata augmentation) are also proposed in order to evaluate \ndifferent CNN architectures. The performance of the pro-\nposed system is tested on three public datasets collected \nunder different conditions: SDUMLA-HMT, CASIA-Iris-\nV3 Interval and IITD iris databases. The results obtained \nfrom the proposed system outperform other state-of-the-art \nof approaches (e.g., Wavelet transform, Scattering trans-\nform, Local Binary Pattern and PCA) by achieving a Rank-1 \n * Alaa S. Al-Waisy \n king_alaa87@yahoo.com\n1 School of Electrical Engineering and Computer Science, \nUniversity of Bradford, Bradford, UK\n Pattern Anal Applic\n1 3\nof stability during a person’s lifetime from one year of age \nuntil death. Finally, it is considered the most secure biom-\netric trait against fraudulent methods and spoofing attacks \nby an imposter where any attempt to change its patterns, \neven with a surgery, is a high risk, unlike the fingerprint \ntrait which is relatively easier to tamper with [6]. 
Despite \nthese advantages, implementing an iris recognition system is \nconsidered a challenging problem due to the iris acquisition \nprocess possibly acquiring irrelevant parts, such as eyelids, \neyelashes, pupil, and specular reflections which may greatly \ninfluence the iris segmentation and recognition outcomes.\nBroadly, biometric systems can be divided into two \nmain types: unimodal and multimodal biometric systems. \nUnimodal systems are based on using a single source of \ninformation (e.g., right iris, left iris, or face) to establish the \nperson’s identity. Although, these systems have been widely \nemployed in government and civilian sensitive applications \nwith a high level of security, they often suffer from a num-\nber of critical limitations and problems that can affect their \nreliability and performance. These critical limitations and \nproblems include: (1) noise in the sensed trait (2) non-uni-\nversality (3) intra-class variations (4) inter-class similarities \n(5) vulnerability to spoof attacks [7, 8]. All these drawbacks \nof unimodal systems can be efficiently addressed by systems \ncombining evidence from multiple sources of information \nfor identifying a person’s identity, which are then referred to \nas multimodal systems. Quite recently, considerable atten-\ntion has been paid to multimodal systems due to their ability \nto achieve better performance compared to unimodal sys-\ntems. Multimodal systems can produce sufficient popula-\ntion coverage by efficiently addressing problems related to \nthe enrollment phase such as non-universality. Furthermore, \nthese systems can provide a higher accuracy and a greater \nresistance to unauthorized access by an imposter than uni-\nmodal systems, due to the difficulty of spoofing or forging \nmultiple biometric traits of a legitimate user at the same \ntime. More details on addressing the other problems can be \nfound in [9]. In general, designing and implementing a mul-\ntimodal biometric system is a challenging task and a number \nof factors that have a great influence on the overall perfor-\nmance need to be addressed, including the cost, resources \nof biometric traits, accuracy, and fusion strategy employed. \nHowever, the most fundamental issue for the designer of the \nmultimodal system is choosing the most powerful biometric \ntraits from multiple sources in the system, and finding an \nefficient method of fusing them [10]. In multimodal bio-\nmetric systems, if the system operates in the identification \nmode, then the output of each classifier can be viewed as \na list of ranks of the enrolled candidates, which represents \na set of all possible matches sorted in descending order of \nconfidence. In this case, the fusion in the rank level can be \napplied using one of the ranking-level fusion methods to \nconsolidate the ranks produced by each individual classifier \nin order to deduce a consensus rank for each person. Then, \nthe scores output are sorted in descending order and the \nidentity with lowest score is presented as the right person.\nIn this paper, two discriminative learning techniques are \nproposed based on the combination of a Convolutional Neu-\nral Network (CNN) and the Softmax classifier as a multino-\nmial logistic regression classifier. CNNs are efficient and \npowerful Deep Neural Networks (DNNs) which are widely \napplied in image processing and pattern recognition with \nthe ability to automatically extract distinctive features from \ninput images even without a preprocessing step. 
Moreo-\nver, CNNs have a number of advantages compared to other \nDNNs, such as fast convergence, simpler architecture, adapt-\nability, and fewer free parameters. In addition, CNNs are \ninvariant to image deformations, such as translation, rota-\ntion, and scaling [11]. The Softmax classifier is a discrimi-\nnative classifier widely used for multi-class classification \npurposes. It was chosen for use on top of the CNN because \nit has produced outstanding results compared to other popu-\nlar classifiers, such as Support Vector Machines (SVMs)in \nterms of accuracy and speed [12]. In this work, the efficiency \nand learning capability of the proposed techniques are inves-\ntigated by employing a training methodology based on the \nback-propagation algorithm with the mini-batch AdaGrad \noptimization method. In addition, other training strategies \nare also used, including dropout and data augmentation \nto prevent the overfitting problem and increase the gener-\nalization ability of the neural network [13, 14], as will be \nexplained later on. The main contributions of this work can \nbe summarized as follows:\n1. An efficient and real-time multimodal biometric system \nis proposed based on fusing the results obtained from \nboth the right and left iris of the same person using one \nof the ranking-level fusion methods.\n2. An efficient deep learning system is proposed called \nIrisConvNet whose architecture is based on a combina-\ntion of a CNN and Softmax classifier to extract discrimi-\nnative features from the iris image without any domain \nknowledge and classify it into one of N classes. To the \nbest of our knowledge, this is the first work that investi-\ngates the potential use of CNNs for the iris recognition \nsystem, especially in the identification mode. It is worth \nmentioning that only two papers have been published \nrecently [15, 16], that investigate the performance of \nCNNs on the iris image. However, these two works \nhave addressed the biometric spoofing detection prob-\nlem with no more than three classes available, which \nis considered a simpler problem compared to the iris \nrecognition system where N class labels need to be cor-\nrectly predicted.\n3. A discriminative training scheme equipped with a num-\nber of training strategies is also proposed in order to \nPattern Anal Applic \n1 3\nevaluate different CNN architectures, including the \nnumber of layers, the number of filters layer, input \nimage size. To the best of our knowledge, this is the first \nwork that compares the performance of these parameters \nin iris recognition.\n4. The performance of the proposed system is tested on \nthree public datasets collected under different condi-\ntions: SDUMLA-HMT, CASIA-Iris-V3 Interval and \nIITD iris databases. The results obtained have dem-\nonstrated that the proposed system outperforms other \nstate-of-the-art of approaches, such as Wavelet trans-\nform, Scattering transform, Average Local Binary Pat-\ntern (ALBP), and PCA.\nThe remainder of the paper is organized as follows: In \nSect. 2, we briefly review some related works and the moti-\nvations behind the proposed study. Section 3 provides an \noverview of the proposed deep learning approaches. The \nimplementation of the proposed iris recognition system \nis presented in Sect. 4. Section 5 shows the experimental \nresults of the proposed system. 
Finally, conclusions and \ndirections for future work are reported in the last section.\n2 Related works and motivations\nIn 1993, the first successful and commercially available iris \nrecognition system was proposed by Daugman [17]. In this \nsystem, the inner and outer boundaries of the iris region are \ndetected using an integro-differential operator. Afterward \nthe iris template is transferred into normalized form using \nDaugman’s rubber sheet method. This is followed by using \na 2D Gabor filter to extract the iris features and the Ham-\nming distance for decision making. However, as reported in \n[18–20], the key limitation of Daugman’s system is that it \nrequires a high-resolution camera to capture the iris image \nand its accuracy significantly decreases under non-ideal \nimaging conditions due to the sensitivity of the iris localiza-\ntion stage to noise and different lighting conditions. In addi-\ntion to Daugman, many researchers have proposed iris rec-\nognition systems using various methods, among which the \nmost notable systems were proposed by Wildes [21], Boles \nand Boashash [22], Lim et al. [23], and Masek [24]. How-\never, most existing iris recognition systems claim to perform \nwell under ideal conditions using developed imagery setup \nto capture high-quality images, but the recognition rate may \nsubstantially decrease when using non-ideal data. Therefore, \nthe iris recognition system is still an open problem and the \nperformance of the state-of-the-art methods still has much \nroom for improvement.\nAs is well known, the success of any biometric system \ndefined as a classification and recognition system mainly \ndepends on the efficiency and robustness of the feature \nextraction and classification stages. In the literature, sev-\neral publications have documented the high accuracy and \nreliability of neural networks, such as the multilayer per-\nceptron (MLP), in many real-world pattern recognition and \nclassification applications [25, 26]. Inspired by a num-\nber of characteristics of such systems (e.g., a powerful \nmathematical model, the ability to learn from experience \nand robustness in handling noisy images), neural net-\nworks are considered as one of the simplest and powerful \nof classifiers [27]. However, traditional neural networks \nhave a number of drawbacks and obstacles that need to be \novercome. Firstly, the input image is required to undergo \nseveral different image processing stages, such as image \nenhancement, image segmentation, and feature extraction \nto reduce the size of the input data and achieve a satis-\nfactory performance. Secondly, designing a handcrafted \nfeature extractor needs a good domain knowledge and a \nsignificant amount of time. Thirdly, an MLP has difficulty \nin handling deformations of the input image, such as trans-\nlations, scaling, and rotation [28]. Finally, a large number \nof free parameters need to be tuned in order to achieve \nsatisfactory results while avoiding the overfitting problem. \nThe large number of these free parameters is due to the use \nof full connections between the neurons in a specific layer \nand all activations in the previous layer [29]. To overcome \nthese limitations and drawbacks, the use of deep learning \ntechniques was proposed. Deep learning can be viewed \nas an advanced subfield of machine learning techniques \nthat depend on learning high-level representations and \nabstractions using a structure composed of multiple non-\nlinear transformations. 
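As background for the Daugman-style baseline described at the start of this section (2D Gabor features compared by Hamming distance), a minimal sketch of masked Hamming-distance matching between binary iris codes. This illustrates the classical matching step, not the CNN approach proposed in the paper; the codes are assumed to be 2D boolean arrays (rows radial, columns angular), and the rotation-compensation range is illustrative:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the bits valid in both codes."""
    valid = mask_a & mask_b
    return np.count_nonzero((code_a ^ code_b) & valid) / np.count_nonzero(valid)

def match(code_a, code_b, mask_a, mask_b, max_shift=8):
    """Compensate small eye rotations by circularly shifting one code
    along the angular axis and keeping the best (lowest) distance."""
    return min(
        hamming_distance(np.roll(code_a, s, axis=1), code_b,
                         np.roll(mask_a, s, axis=1), mask_b)
        for s in range(-max_shift, max_shift + 1)
    )
```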
In deep learning, the hierarchy of \nautomatically learning features at multiple levels of repre-\nsentations can provide a good understanding of data such \nas image, text, and audio, without depending completely \non any domain knowledge and handcrafted features [11]. \nIn the last decade, deep learning has attracted much atten-\ntion from research teams with promising and outstanding \nresults in several areas, such as natural language process-\ning (NLP) [30], texture classification [31], object recogni-\ntion [14], face recognition [32], speech recognition [33], \ninformation retrieval [34], traffic sign classification [35].\n3 Overview of the proposed approaches\nIn this section, a brief description of the proposed deep \nlearning approach is given, which incorporates two discrimi-\nnative learning techniques: a CNN and a Softmax classi-\nfier. The main aim here is to inspect their internal structures \nand identify their strengths and weaknesses to enable the \nproposal of an iris recognition system that integrates the \nstrengths of these two techniques.\n Pattern Anal Applic\n1 3\n3.1 Convolutional Neural Network\nA Convolutional Neural Network (CNN) is a feed-forward \nmultilayer neural network, which differs from traditional \nfully connected neural networks by combining a number of \nlocally connected layers aimed at automated feature recogni-\ntion, followed by a number of fully connected layers aimed \nat classification [36]. The CNN architecture, as illustrated \nin Fig. 1, comprises several distinct layers including sets of \nlocally connected convolutional layers (with a specific num-\nber of different learnable kernels in each layer), subsampling \nlayers named pooling layers, and one or more fully con-\nnected layers. The internal structure of the CNN combines \nthree architectural concepts, which make the CNN success-\nful in different fields, such as image processing and pattern \nrecognition, speech recognition, and NLP. The first concept \nis applied in both convolutional and pooling layers, in which \neach neuron receives input from a small region of the previ-\nous layer called the local receptive field [27] equal in size to \na convolution kernel. This local connectivity scheme ensures \nthat the trained CNN produces strong responses to capture \nlocal dependencies and extracts elementary features in the \ninput image (e.g., edges, ridges, curves, etc.) which can play \na significant role in maximizing the inter-class variations and \nminimizing the intra-class variations, and hence increasing \nthe Correct Recognition Rate (CRR) of the iris recognition \nsystem. Secondly, the convolutional layer applies the sharing \nparameters (weights) scheme in order to control the model \ncapacity and reduce its complexity. At this point, a form of \ntranslational invariance is obtained using the same convolu-\ntion kernel to detect a specific feature at different locations \nin the iris image [37]. Finally, the nonlinear down sampling \napplied in the pooling layers reduces the spatial size of the \nconvolutional layer’s output and reduces the number of the \nfree parameters of the model. Together, these characteristics \nmake the CNN very robust and efficient at handling image \ndeformations and other geometric transformations, such as \ntranslation, rotation, and scaling [36]. 
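As a concrete illustration of the layer pattern just described, a minimal PyTorch sketch of a CNN of this shape: stacked convolution + ReLU + max-pooling blocks feeding two fully connected layers and a softmax over N classes. Layer counts and filter numbers are placeholders, not the paper's settings; Sect. 5.2 below searches over such configurations:

```python
import torch
import torch.nn as nn

class IrisCNN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # (2 x 2) pooling, stride 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 256), nn.ReLU(),
            nn.Dropout(p=0.5),                    # dropout on the FC part only
            nn.Linear(256, num_classes),          # softmax applied via the loss
        )

    def forward(self, x):                         # x: (batch, 1, 64, 64)
        return self.classifier(self.features(x))

logits = IrisCNN(num_classes=106)(torch.randn(8, 1, 64, 64))
probs = logits.softmax(dim=1)                     # probabilities over classes
```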
In more detail, these layers are:

• Convolutional layer: In this layer, the parameters (weights) consist of a set of learnable kernels that are randomly initialized and learned by the back-propagation algorithm. These kernels have a few local connections, but connect through the full depth of the previous layer. The result of each kernel convolved across the whole input image is called the activation (or feature) map, and the number of activation maps equals the number of kernels applied in that layer. Figure 1 shows a first convolutional layer consisting of 6 activation maps stacked together, produced from 6 kernels independently convolved across the whole input image. Hence, each activation map is a grid of neurons that share the same parameters. The activation map of the convolutional layer is defined as:

$$ y_j(r) = \max\Big(0,\; b_j(r) + \sum_i k_{ij}(r) * x_i(r)\Big) \tag{1} $$

Here, x_i(r) and y_j(r) are the ith input and the jth output activation map, respectively; b_j(r) is the bias of the jth output map, * denotes convolution, and k_{ij}(r) is the convolution kernel between the ith input map and the jth output map. The ReLU activation function (y = max(0, x)) is used here to add non-linearity to the network, as will be explained later on.

• Max-pooling layer: Its main function is to reduce the spatial size of the convolutional layers' output representations, and it produces a limited form of translational invariance. Once a specific feature has been detected by the convolutional layer, only its approximate location relative to other features is kept. As shown in Fig. 1, each depth slice of the input volume (the convolutional layer's output) is divided into non-overlapping regions, and for each subregion the maximum value is taken. A commonly used form is max-pooling with regions of size (2 × 2) and a stride of 2. The depth dimension of the input volume is kept unchanged. The max-pooling layer can be formulated as follows:

$$ y^{i}_{j,k} = \max_{0 \le m,\, n < s} \left( x^{i}_{j \cdot s + m,\; k \cdot s + n} \right) \tag{2} $$

Here, y^i_{j,k} represents a neuron in the ith output activation map, computed over an (s × s) non-overlapping local region in the ith input map x^i_{j,k}.

• Fully connected layers: The output of the last convolutional or max-pooling layer is fed to one or more fully connected layers, as in a traditional neural network. In these layers, the outputs of all neurons in layer (l − 1) are fully connected to every neuron in layer l. The output y^(l)(j) of neuron j in a fully connected layer l is defined as follows:

$$ y^{(l)}(j) = f^{(l)}\left( \sum_{i=1}^{N^{(l-1)}} y^{(l-1)}(i)\, w^{(l)}(i, j) + b^{(l)}(j) \right) \tag{3} $$

where N^(l−1) is the number of neurons in the previous layer (l − 1), w^(l)(i, j) is the weight for the connection from neuron i in layer (l − 1) to neuron j in layer l, and b^(l)(j) is the bias of neuron j in layer l. As for the other two layers, f^(l) represents the activation function of layer l.

Fig. 1 An illustration of the CNN architecture, where the gray and green squares refer to the activation maps and the learnable convolution kernels, respectively. The cross-lines between the last two layers refer to the fully connected neurons (color figure online).

3.2 Softmax regression classifier

The classifier implemented in the fully connected part of the system, shown in Fig. 1, is the Softmax regression classifier, a generalized form of the binary logistic regression classifier intended to handle multi-class classification tasks. Suppose that there are K classes and n labeled training samples {(x_1, y_1), …, (x_n, y_n)}, where x_i ∈ R^m is the ith training example and y_i ∈ {1, …, K} is the class label of x_i. Then, for a given test input x_i, the Softmax classifier produces a K-dimensional vector (whose elements sum to 1), where each element refers to the estimated probability of each class label conditioned on this input feature. The hypothesis h_θ(x_i), which estimates the probability vector of each label, can be defined as follows:

$$ h_{\theta}(x_i) = \begin{bmatrix} p(y_i = 1 \mid x_i; \theta) \\ p(y_i = 2 \mid x_i; \theta) \\ \vdots \\ p(y_i = K \mid x_i; \theta) \end{bmatrix} = \frac{1}{\sum_{j=1}^{K} e^{\theta_j^{T} x_i}} \begin{bmatrix} e^{\theta_1^{T} x_i} \\ e^{\theta_2^{T} x_i} \\ \vdots \\ e^{\theta_K^{T} x_i} \end{bmatrix} \tag{4} $$

Here, (θ_1, θ_2, …, θ_K) are the parameters to be randomly initialized and learned by the back-propagation algorithm. The cost function used for the Softmax classifier is the cross-entropy loss function, defined as follows:

$$ J(\theta) = -\frac{1}{m} \left[ \sum_{i=1}^{m} \sum_{j=1}^{K} 1\{y_i = j\} \log \frac{e^{\theta_j^{T} x_i}}{\sum_{l=1}^{K} e^{\theta_l^{T} x_i}} \right] + \frac{\lambda}{2} \sum_{i=1}^{K} \sum_{j=0}^{n} \theta_{ij}^{2} \tag{5} $$

Here, 1{·} is a logical function: when a true statement is given, 1{·} = 1, otherwise 1{·} = 0. The second term is a weight decay term that tends to reduce the magnitude of the weights and prevents the overfitting problem. Finally, the gradient descent method is used to find the minimum of J(θ), as follows:

$$ \nabla_{\theta_j} J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ x_i \left( 1\{y_i = j\} - p(y_i = j \mid x_i; \theta) \right) \right] + \lambda \theta_j \tag{6} $$

In Eq. (6), the gradients are computed for a single class j, and in each iteration the parameters are updated for any given training pair (x_i, y_i) as θ_new = θ_old − α ∇_θ J(θ), where α is the learning rate [38].

4 The proposed system

An overview of the proposed iris recognition system is shown in Fig. 2. Firstly, a preprocessing procedure is implemented, based on an efficient and automatic iris localization, to carefully separate the iris region from the background and all extraneous features, such as pupil, sclera, eyelids, eyelashes, and specular reflections. In this work, the main reason for defining the iris area, rather than the whole eye image, as the input to the CNN is to reduce the computational complexity of the CNN. Another reason is to avoid the degradation of the matching and feature extraction processes caused by the appearance of eyelids and eyelashes. After detection, the iris region is transformed into a normalized form with fixed dimensions in order to allow direct comparison between two iris images of initially different sizes.

The normalized iris image is then used to provide robust and distinctive iris features by employing the CNN as an automatic feature extractor. The matching score is obtained using the feature vectors generated from the last fully connected layer as the input to the Softmax classifier. Finally, the matching scores of both the right and left iris images are fused to establish the identity of the person whose iris images are under investigation. During the training phase, different CNN configurations are trained on the training set and tested on the validation set to obtain the one with the smallest error, which we call IrisConvNet. Its performance on test data is then assessed in the testing phase.

Fig. 2 An overview of the proposed multi-biometric iris recognition system.

4.1 Iris localization

Precise localization of the iris region plays an important role in improving the accuracy and reliability of an iris recognition system, as the performance of the subsequent stages directly depends on the quality of the detected iris region. The iris localization procedure aims to detect the two iris region boundaries: the inner (pupil–iris) boundary and the outer (iris–sclera) boundary. However, the task becomes more difficult when parts of the iris are covered by eyelids and eyelashes. In addition, changes in the lighting conditions during the acquisition process can affect the quality of the extracted iris region and hence the iris localization and recognition outcomes. In this section, a brief description of our iris localization procedure [39] is given, in which an efficient and automatic algorithm detects the inner and outer iris boundaries. As depicted in Fig. 3, firstly, a reflection mask is calculated after the detection of all the specular reflection spots in the eye image, to aid their removal. These detected spots are then painted using a pre-defined reflection mask and the roifill MATLAB function. Next, the inner and outer boundaries are detected by employing an enhancement procedure based on a 2D Gaussian filter and histogram equalization, which reduces the computational complexity of the Circular Hough Transform (CHT), smooths the eye image, and enhances the contrast between the iris and sclera regions. This is followed by applying a coherent CHT to obtain the center coordinates and radii of the pupil and iris circles. Finally, the upper and lower eyelid boundaries are detected using a fast and accurate eyelid detection algorithm, which employs an anisotropic diffusion filter with the Radon transform to fit them as straight lines. For further details on the iris localization procedure, refer to Reference [39].

Fig. 3 Overall stages of the proposed iris localization procedure.

4.2 Iris normalization

Once the iris boundaries have been detected, iris normalization is implemented to produce a fixed-dimension feature vector that allows comparison between two different iris images. The main advantage of the iris normalization process is to remove the dimensional inconsistencies that can occur due to stretching of the iris region caused by pupil dilation under varying levels of illumination. Other causes of dimensional inconsistencies include changing imaging distance, elastic distortion in the iris texture that can affect the iris matching outcome, rotation of the camera or eye, and so forth. To address all these issues, the iris normalization process applies Daugman's rubber sheet mapping to transform the iris image from Cartesian coordinates to polar coordinates, as shown in Fig. 4.
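A minimal NumPy sketch of this remapping (the formal definition follows as Eq. (7) below); the grid resolution is illustrative, not the paper's setting, and the circles are assumed to lie inside the image:

```python
import numpy as np

def rubber_sheet(image, pupil, iris, radial_res=64, angular_res=256):
    """Daugman rubber-sheet mapping: sample the iris annulus onto a fixed
    (radial_res x angular_res) polar grid. `pupil` and `iris` are
    (cx, cy, radius) circles from the localization stage."""
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    r = np.linspace(0, 1, radial_res)
    # Boundary points along each direction theta
    xp = pupil[0] + pupil[2] * np.cos(theta)   # inner (pupil) boundary
    yp = pupil[1] + pupil[2] * np.sin(theta)
    xl = iris[0] + iris[2] * np.cos(theta)     # outer (iris) boundary
    yl = iris[1] + iris[2] * np.sin(theta)
    # x(r, theta) = (1 - r) * xp(theta) + r * xl(theta), likewise for y
    x = (1 - r[:, None]) * xp[None, :] + r[:, None] * xl[None, :]
    y = (1 - r[:, None]) * yp[None, :] + r[:, None] * yl[None, :]
    return image[y.round().astype(int), x.round().astype(int)]
```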
Daugman's mapping takes each point (x, y) within the iris region to a pair of normalized non-concentric polar coordinates (r, θ), where r is on the interval [0, 1] and θ is the angle on the interval [0, 2π]. This mapping of the iris region can be defined mathematically as follows:

$$ I\big(x(r, \theta),\, y(r, \theta)\big) \rightarrow I(r, \theta) $$
$$ x(r, \theta) = (1 - r)\, x_p(\theta) + r\, x_l(\theta) \tag{7} $$
$$ y(r, \theta) = (1 - r)\, y_p(\theta) + r\, y_l(\theta) $$

Here I(x, y) is the intensity value at (x, y) in the iris region image. The parameters x_p, x_l, y_p, and y_l are the coordinates of the pupil and iris boundaries along the θ direction.

4.3 Deep learning for iris recognition

Once a normalized iris image is obtained, feature extraction and classification are performed using a deep learning approach that combines a CNN and a Softmax classifier. In this work, the structure of the proposed CNN involves a combination of convolutional layers and subsampling max-pooling layers. The top layers in the proposed CNN are two fully connected layers for the classification task. The output of the last fully connected layer is fed into the Softmax classifier, which produces a probability distribution over the N class labels. Finally, a cross-entropy loss function, suitable for the classification task, is used to quantify the agreement between the predicted class scores and the target labels and to calculate the cost value for different configurations of the CNN. In this section, the proposed methodology for finding the best CNN configuration for the iris recognition task is explained. Based on domain knowledge from the literature, three main aspects have a great influence on the performance of a CNN and need to be investigated: (1) training methodology, (2) network configuration or architecture, and (3) input image size. The performance of some carefully chosen training strategies, including the dropout method, the AdaGrad method, and data augmentation, is investigated as part of this work. These training strategies play a significant role in preventing overfitting during the learning process and increasing the generalization ability of the neural network. These three aspects are described in more detail in the next section.

4.3.1 Training methodology

In this work, all of the experiments were carried out, given a particular set of sample data, using 60% randomly selected samples for training and the remaining 40% for testing. The training methodology, as in [40, 41], starts training a particular CNN configuration by dividing the training set into four sets after the data augmentation procedure is implemented: three sets are used to train the CNN, and the last one is used as a validation set for testing the generalization ability of the network during the learning process and for storing the weights configuration that performs best on it, with minimum validation error, as shown in Fig. 5.
In this work, the training procedure is performed using the back-propagation algorithm with the mini-batch AdaGrad optimization method introduced in [42], where each of the three training sets is divided into mini-batches, and the training errors are calculated on each mini-batch in the Softmax layer and back-propagated to the lower layers.

After each epoch (a pass through the entire training samples), the validation set is used to measure the accuracy of the current configuration by calculating the cost value and the Top-1 validation error rate. Then, according to the AdaGrad optimization method, the learning rate is scaled by a factor equal to the square root of the sum of squares of the previous gradients, as shown in Eq. (8). An initial learning rate must be selected; hence, two of the most commonly used learning rate values are analyzed herein, as shown in Sect. 5.2.1. To avoid overfitting, the training procedure is stopped as soon as the cost value and the error on the validation set start to rise again, which means that the network is starting to overfit the training set. This is one of the regularization methods, called the early stopping procedure. In this work, different numbers of epochs are investigated, as explained in Sect. 5.2.1. Finally, after the training procedure is finished, the testing set is used to measure the efficiency of the final configuration in predicting unseen samples by calculating the identification rate at Rank-1 as an optimization objective, which is maximized during the learning process. Then, the Cumulative Match Characteristic (CMC) curve is used to visualize the performance of the best configuration obtained as the iris identification system. The main steps of the proposed training methodology are summarized as follows:

1. Split the dataset into three sets: training, validation, and test set.
2. Select a CNN architecture and a set of training parameters.
3. Train each CNN configuration using the training set.
4. Evaluate each CNN configuration using the validation set.
5. Repeat steps 3 through 4 for N epochs.
6. Select the best CNN configuration, with minimal error on the validation set.
7. Evaluate the best CNN configuration using the test set.

Fig. 4 Daugman's rubber sheet model to transfer the iris region from Cartesian coordinates to polar coordinates.

4.3.2 Network architecture

Once the parameters of the training methodology are determined (e.g., learning rate, number of epochs, etc.), the methodology is used to identify the best network architecture. From the literature, it appears that choosing the network architecture is still an open problem and is application dependent. The main concern in finding the best CNN architecture is the number of layers to employ in transforming from the input image to a high-level feature representation, along with the number of convolution filters in each layer. Therefore, several CNN configurations are evaluated using the proposed training methodology by varying the number of convolutional and pooling layers, and the number of filters in each layer, as explained in Sect. 5.2.2.
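To make the training recipe of Sect. 4.3.1 concrete, a self-contained toy sketch of mini-batch AdaGrad (the update of Eq. (8) below), Top-1 validation error, and early stopping, applied to a plain softmax classifier on random data. All sizes and hyper-parameters here are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy data standing in for iris feature vectors: 300 samples, 20 dims, 5 classes.
X = rng.normal(size=(300, 20)); y = rng.integers(0, 5, size=300)
Xtr, ytr, Xval, yval = X[:240], y[:240], X[240:], y[240:]

theta = np.zeros((20, 5))
G = np.zeros_like(theta)          # accumulated squared gradients (Eq. 8)
lr, eps, lam = 0.01, 1e-8, 5e-4   # initial learning rate, epsilon, weight decay
best_err, best_theta, bad_epochs = np.inf, theta.copy(), 0

for epoch in range(200):
    for i in range(0, len(Xtr), 32):               # mini-batches of 32
        xb, yb = Xtr[i:i + 32], ytr[i:i + 32]
        p = softmax(xb @ theta)
        p[np.arange(len(yb)), yb] -= 1.0           # p - 1{y = j}  (Eq. 6)
        grad = xb.T @ p / len(yb) + lam * theta
        G += grad ** 2                             # AdaGrad accumulator
        theta -= lr / np.sqrt(G + eps) * grad      # per-parameter update (Eq. 8)
    err = np.mean(softmax(Xval @ theta).argmax(1) != yval)  # Top-1 val error
    if err < best_err:                             # keep the best configuration
        best_err, best_theta, bad_epochs = err, theta.copy(), 0
    else:                                          # early stopping
        bad_epochs += 1
        if bad_epochs >= 5:
            break
```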
To reduce the number of configurations to be evaluated, the number of fully connected layers is fixed at two, as in [43, 44], and the filter sizes for both the convolutional and pooling layers are kept the same as in [15], except in the first convolutional layer, where the size is set to (3 × 3) pixels to avoid a rapid decline in the amount of input data.

4.3.3 Input image size

The input image size is one of the hyper-parameters of a CNN with a significant influence on the speed and accuracy of the neural network. In this work, the influence of input image size is investigated using sizes of (64 × 64) pixels and (128 × 128) pixels (generated from original images of larger size, as described in the data augmentation section below): for values lower than the former, the iris patterns become invisible, while for values higher than the latter, the larger memory requirements and higher computational costs are potential problems. In order to control the spatial size of the input and output volumes, a zero padding (of 1 pixel) is applied only to the input layer.

Fig. 5 An overview of the proposed training methodology to find the best CNN architecture, where CRR refers to the correct recognition rate at Rank-1.

4.3.4 Training strategies

In this section, a number of carefully designed training techniques and strategies are used to prevent overfitting during the learning process and increase the generalization ability of the neural network. These techniques are:

1. Dropout method: This is a regularization method recently introduced by Srivastava et al. [13] that can be used to prevent neural networks from overfitting the training set. The dropout technique is implemented in each training iteration by completely ignoring individual nodes with probability 0.5, along with their connections. This method decreases complex co-adaptations of nodes by preventing interdependencies from emerging between them. The dropped nodes do not participate in either the forward or the backward pass. Therefore, as shown in Fig. 6b, only a reduced network is left and trained on the input data in that training iteration. As a result, training a neural network with n nodes ends up with a collection of 2^n possible "thinned" neural networks that share weights. This allows the neural network to avoid overfitting, learn more robust features that generalize well to new data, and speeds up the training process. Furthermore, it provides an efficient way of combining many neural networks with different architectures, which makes the combination more beneficial. In the testing phase, it is not practical to average the predictions of 2^n "thinned" neural networks, especially for large values of n. However, this can easily be addressed by using a single network without dropout, with the outgoing weights of each node multiplied by a factor of 0.5 to ensure that the output of any hidden node is the same as in the training phase. In this work, the dropout method is applied only to the two fully connected layers, as they include most of the parameters in the proposed CNN and are more vulnerable to overfitting. More information on the dropout method can be found in [13].
2. AdaGrad algorithm: In the iris recognition system, infrequent features can contribute significantly to improving the accuracy of the system by minimizing intra-class variations and inter-class similarities, which are caused by several factors, including pupil dilation/constriction, eyelid/eyelash occlusion, and specular reflection spots. However, in the standard Stochastic Gradient Descent (SGD) algorithm for learning rate adaptation, infrequent and frequent features are weighted equally in terms of learning rate, which means that the influence of the infrequent features is practically discounted. To counter this, the AdaGrad algorithm is implemented to increase the learning rate for sparser data, which translates into larger updates for infrequent features, and to decrease the learning rate for less sparse data, which translates into smaller updates for frequent features. The AdaGrad algorithm also has the advantage of being simpler to implement than the SGD algorithm [42]. The AdaGrad technique has been shown to improve the convergence and stability of neural networks over SGD in many applications (e.g., NLP, document classification) in which infrequent features are more useful than frequent ones. The AdaGrad algorithm computes the learning rate η for every parameter θ_i at each time step t based on the previous gradients of the same parameter, as follows:

$$ \theta^{(t+1)}_{i} = \theta^{(t)}_{i} - \frac{\eta}{\sqrt{G_{t,ii} + e}}\; g_{t,i} \tag{8} $$

Here, g_{t,i} = ∇_θ J(θ_i) is the gradient of the objective function at time step t, and G_{t,ii} = Σ_{r=1}^{t} g²_{r,i} is the diagonal matrix in which each diagonal element (i, i) is the sum of the squares of the gradients for the parameter θ_i up to time step t. Finally, e is a small constant to avoid division by zero. More details on the AdaGrad algorithm can be found in [42].

Fig. 6 An illustration of applying the dropout method to a standard neural network: (a) a standard neural network with 2 hidden layers before applying the dropout method; (b) an example of a reduced neural network after applying the dropout method. The crossed units and the dashed connections have been dropped.

3. Data augmentation: It is well known that DNNs need to be trained on a large number of training samples to achieve satisfactory prediction and prevent overfitting [45]. Data augmentation is a simple and commonly used method to artificially enlarge the dataset by methods such as random crops, intensity variations, and horizontal flipping. In this work, data augmentation is implemented similarly to [14]. Initially, a given rectangular image is rescaled so that the longest side is reduced to the length of the shortest side, instead of cropping out a square central patch from the rectangular image as in [14], which can lose crucial features from the iris image. Then, five image regions are cropped from the rescaled image, corresponding to the four corners and the central region. In addition, their horizontally flipped versions are also acquired. As a result, ten image patches are generated from each input image. At prediction time, the same ten image patches are extracted from each input image, and the mean of the predictions on the ten patches is taken at the Softmax layer.
In this paper, the \nperformance of the CNN is evaluated using two different \ninput image sizes so the data augmentation procedure is \nimplemented twice, once for each size. Image patches of \nsize (64 × 64) pixels are extracted from original input \nimages of size (256 × 70) pixels, and image patches of \nsize (128 × 128) pixels are extracted from original input \nimages of size (256 × 135) pixels.\n4. The ReLU activation function is applied on the top of \nthe convolutional and fully connected layers in order to \nadd non-linearity to the network. As reported by Kriz-\nhevsky [14], the ReLU f (x) = 퐦퐚퐱(0, x) has been found \nto be crucial to learning when using DNNs, especially \nfor CNNs, compared to other activation functions, such \nas the sigmoid and tangent. In addition, it results in neu-\nral network training several times faster than with other \nactivation functions, without making a significant dif-\nference to generalization accuracy.\n5. Weight decay is used in the learning process as an addi-\ntional term in calculating the cost function and updating \nthe weights. Here, the weight decay parameter is set to \n0.0005 as in [46].\n4.4 Ranking‑level fusion\nIn this paper, rank level fusion is employed where each indi-\nvidual classifier produces a ranked list of possible matching \nscores for each user. (A higher rank indicates a better match). \nThen these ranks are integrated to create a new ranking list \nthat is used to make the final decision on user identity. Sup-\npose, that there are P persons registered in the database and \nthe number of employed classifiers is C. Let ri, j is the rank \nassigned to jth person in the database by the ith classifier, \ni = 1,…,C and j = 1,…,P. Then, the consensus ranks Rc for \na particular class are obtained using the following fusion \nmethods:\n1. Highest rank method is a useful method for fusing the \nranks only when the number of registered users is large \ncompared to the number of classifiers, which is usually \nthe scenario in the identification system. The consensus \nrank of a particular class is computed as the lowest rank \ngenerated by different classifiers (minimum ri, j value) \nas follows:\nThe main advantage of this method is the ability of \nexploiting the strength of each classifier effectively, where as \nlong as there is at least one classifier that assigns a high rank \nri, j value to the right identity, it is very likely that the right \nidentity will receive the highest rank value after the reorder-\ning process. However, the final ranking list may have one or \nmore ties that can negatively affect the accuracy of the final \ndecision. In this work, the ties are broken by incorporating \na small factor epsilon (e), as described in [47] as follows:\nHere,\nHere, the value of ei is ensured to be small by assigning a \nlarge value to parameter K.\n2. Borda count method using this fusion method, the con-\nsensus rank of a query identity is computed as the sum \nof ranks assigned by individual classifiers indepen-\ndently, as follows:\nThe main advantage of the Borda count method is that it \nis very simple to implement without the need for any train-\ning phase. However, this method is highly susceptible to the \nimpact of weak classifiers, as it supposes that all the ranks \nproduced by the individual classifiers are statistically inde-\npendent and their performance is equally well [48].\n3. 
4. ReLU activation function: the ReLU is applied on top of the convolutional and fully connected layers in order to add non-linearity to the network. As reported by Krizhevsky et al. [14], the ReLU $f(x) = \max(0, x)$ has been found to be crucial to learning with DNNs, especially CNNs, compared to other activation functions such as the sigmoid and tangent. In addition, it results in neural network training several times faster than with other activation functions, without making a significant difference to generalization accuracy.

5. Weight decay: weight decay is used in the learning process as an additional term in calculating the cost function and updating the weights. Here, the weight decay parameter is set to 0.0005, as in [46].

4.4 Ranking-level fusion

In this paper, rank-level fusion is employed, where each individual classifier produces a ranked list of possible matching scores for each user (a higher rank indicates a better match). These ranks are then integrated to create a new ranking list that is used to make the final decision on the user's identity. Suppose that there are P persons registered in the database and that the number of employed classifiers is C. Let $r_{i,j}$ be the rank assigned to the jth person in the database by the ith classifier, i = 1, ..., C and j = 1, ..., P. Then, the consensus rank $R_c$ for a particular class is obtained using the following fusion methods:

1. Highest rank method: this is a useful method for fusing the ranks only when the number of registered users is large compared to the number of classifiers, which is usually the scenario in an identification system. The consensus rank of a particular class is computed as the lowest rank generated by the different classifiers (the minimum $r_{i,j}$ value), as follows:

$$R_c = \min_{1 \le i \le C} r_{i,j} \qquad (9)$$

The main advantage of this method is its ability to exploit the strength of each classifier effectively: as long as at least one classifier assigns a high rank (a minimum $r_{i,j}$ value) to the right identity, it is very likely that the right identity will receive the highest rank after the reordering process. However, the final ranking list may contain one or more ties that can negatively affect the accuracy of the final decision. In this work, ties are broken by incorporating a small factor $e_i$, as described in [47], as follows:

$$R_c = \min_{1 \le i \le C} r_{i,j} + e_i \qquad (10)$$

Here,

$$e_i = \sum_{i=1}^{C} r_{i,j} / K \qquad (11)$$

The value of $e_i$ is kept small by assigning a large value to the parameter K.

2. Borda count method: using this fusion method, the consensus rank of a query identity is computed as the sum of the ranks assigned by the individual classifiers independently, as follows:

$$R_c = \sum_{i=1}^{C} r_{i,j} \qquad (12)$$

The main advantage of the Borda count method is that it is very simple to implement, without the need for any training phase. However, this method is highly susceptible to the influence of weak classifiers, as it assumes that all the ranks produced by the individual classifiers are statistically independent and that the classifiers perform equally well [48].

3. Logistic regression method: this is a generalized form of the Borda count method that addresses the problem of non-uniform performance among the individual classifiers. The consensus rank is calculated by sorting the users according to the weighted summation of their ranks obtained from the individual classifiers, as follows:

$$R_c = \sum_{i=1}^{C} w_i \cdot r_{i,j} \qquad (13)$$

Here $w_i$, the weight assigned to the ith classifier, i = 1, ..., C, is determined by logistic regression. In this work, $w_i$ is assigned to be 0.5 for both the left and right iris images. This method is very useful in the presence of individual classifiers with significant differences in their performance. However, a training phase is needed to identify the weight for each individual classifier, which can be computationally expensive.
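As an illustration, the three consensus-rank rules in Eqs. (9) to (13) can be written compactly as below. This is a hedged sketch (the rank-matrix layout and the value of K are assumptions for illustration), not the authors' code.

```python
import numpy as np

def fuse_ranks(R, method="highest", w=None, K=1000.0):
    """Fuse a (C x P) rank matrix R, where R[i, j] is the rank that
    classifier i assigns to enrolled person j (1 = best match).
    Returns a consensus rank per person; the identity is the argmin."""
    if method == "highest":                      # Eqs. (9)-(11)
        tie_break = R.sum(axis=0) / K            # small e term breaks ties
        return R.min(axis=0) + tie_break
    if method == "borda":                        # Eq. (12)
        return R.sum(axis=0)
    if method == "logistic":                     # Eq. (13)
        w = np.full(R.shape[0], 0.5) if w is None else np.asarray(w)
        return w @ R
    raise ValueError(method)

# Example: two classifiers (left/right iris), three enrolled persons.
R = np.array([[1, 3, 2],
              [2, 1, 3]])
print(fuse_ranks(R, "highest"))   # tie between persons 0 and 1 is broken
```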
5 Experimental results

In this section, a number of extensive experiments to assess the effectiveness of the proposed deep learning approach for iris recognition on some of the most challenging iris databases currently available in the public domain are described. Three iris databases, namely SDUMLA-HMT [49], CASIA-Iris-V3 Interval [50], and IITD [51], are employed as testing benchmarks and for comparing the results obtained with current state-of-the-art approaches. In most cases, the iris images in these databases were captured under different conditions of pupil dilation, eyelid/eyelash occlusion, head tilt, slight eyelid shadow, specular reflection, etc. The SDUMLA-HMT iris database comprises 1060 images taken from 106 subjects, with each subject providing 5 left and 5 right iris images. In this database, all images were captured using an intelligent iris capture device, with the distance from the device to the eye between 6 cm and 32 cm. To the best of our knowledge, this is the first work that uses all the subjects in this database for the identification task. The CASIA-Iris-V3 Interval database comprises 2566 images from 249 subjects, captured with a self-developed close-up iris camera. In this database, the number of images per subject differs, and the 129 subjects with fewer than 14 iris images were not used in the experiments. The IIT Delhi iris database comprises 1120 iris images captured from 224 subjects (176 males and 48 females), who are students and staff at IIT Delhi, New Delhi, India. For each person, 5 iris images for each eye were captured using three different cameras: JIRIS, JPC1000, and digital CMOS cameras. The basic characteristics of these three databases are summarized in Table 1.

Table 1 The characteristics of the adopted iris image databases

| Property | SDUMLA-HMT | CASIA-Iris-V3 | IITD |
|---|---|---|---|
| Number of classes | 106 | 120 | 224 |
| Samples per subject | 5 right and 5 left | 7 right and 7 left | 5 right and 5 left |
| Number of images | 1060 | 1680 | 2240 |
| Image size | (768 × 576) pixels | (320 × 280) pixels | (320 × 240) pixels |
| Image format | BMP | JPEG | BMP |

5.1 Iris localization accuracy

As explained in a previous paper [39], the performance of the proposed iris localization model was tested on two different databases and showed encouraging results, with overall accuracies of 99.07 and 96.99% on the CASIA Version 1.0 and SDUMLA-HMT databases, respectively. The same evaluation procedure is applied herein in order to evaluate the performance of the iris localization model on the CASIA-Iris-V3 and IITD databases. The iris localization is considered accurate if and only if two conditions are fulfilled: firstly, the inner and outer iris boundaries are correctly localized; secondly, the upper and lower eyelid boundaries are correctly detected. Finally, the accuracy rate of the proposed iris localization method is computed as follows:

$$\text{Accuracy Rate} = \frac{\text{Correctly Localized Iris Images}}{\text{Total Number of Images}} \times 100 \qquad (14)$$

As can be seen from Table 2, overall accuracies of 99.82 and 99.87%, with times of 0.62 s and 0.51 s, were achieved by applying the proposed iris localization model to the CASIA-Iris-V3 and IITD databases, respectively. The proposed model managed to properly detect the iris region in 1677 out of 1680 eye images in the CASIA-Iris-V3 Interval database, while 2237 out of 2240 eye images were properly detected in the IITD database. The incorrect iris localization results were handled manually to ensure that all subjects have the same number of images for the subsequent evaluation of the overall proposed system.

The performance of the proposed model is also compared against other existing approaches. The results obtained demonstrate that the proposed system outperforms the indicated state-of-the-art approaches in terms of accuracy in 14 out of 14 cases, and in terms of running time in 6 out of 9 cases, where this information is available.

Table 2 Comparison of the proposed iris localization model with previous approaches (bold values in the original indicate the highest obtained rates)

| Approach | CASIA-Iris-V3 Accuracy (%) | CASIA-Iris-V3 Time (s) | IITD Accuracy (%) | IITD Time (s) |
|---|---|---|---|---|
| Jan et al. [52] | 99.50 | 7.75 | 99.40 | 8.52 |
| Wang et al. [53] | 96.95 | 165.4 | 96.07 | 145.4 |
| Mahmoud and Ali [54] | 99.18 | – | – | – |
| Uhl et al. [55] | 74.00 | 0.21 | – | – |
| Ugbaga et al. [56] | 98.90 | – | – | – |
| Umer et al. [57] | 95.87 | 0.89 | 98.48 | 0.77 |
| Wild et al. [58] | 98.13 | – | 97.60 | – |
| Aydi et al. [59] | 96.51 | 9.049 | – | – |
| Pawar et al. [60] | 96.88 | – | – | – |
| Mehrotra et al. [61] | 99.55 | 0.396 | – | – |
| Proposed iris localization | 99.82 | 0.62 | 99.87 | 0.51 |
5.2 Finding the best CNN

In this section, the extensive experiments performed to find the best CNN model (called IrisConvNet) for the iris recognition system are described. Based on domain knowledge from the literature, sets of training parameters and CNN configurations, as illustrated in Fig. 7, were evaluated to study their behavior and to obtain the best CNN. The performance of this best system was then used to make comparisons with current state-of-the-art iris recognition systems.

Fig. 7 An illustration of the IrisConvNet model for iris recognition

5.2.1 Training parameters evaluation

As mentioned previously, a set of training parameters is needed in order to study and analyze their influence on the performance of the proposed deep learning approach and to design a powerful network architecture. All these experiments were conducted on the three different iris databases, and the parameters with the best performance (e.g., the lowest validation error rate and best generalization ability) were kept to be used later in finding the best network architecture. As the initial network architecture, the Spoofnet architecture described in [15] was used with only a few changes: the receptive field in the first convolutional layer was set to (3 × 3) pixels rather than (5 × 5) pixels, to avoid a rapid decline in the amount of input data, and the output of the Softmax layer was set to N units (the number of classes) instead of 3 units as in Spoofnet. Finally, the (64 × 64) input image size rather than (128 × 128) was used in these experiments, with zero padding of 1 pixel applied only to the input layer. The first evaluation analyzed the influence of the learning rate parameter using the AdaGrad optimization method. Following the proposed training methodology, an initial learning rate of 10^-3 was first employed, as in [62]. However, we observed that the model took too long to converge, because this learning rate was too small and was reduced continuously after each epoch according to the AdaGrad method. Therefore, an initial learning rate of 10^-2 was used for all the remaining experiments. Initially, the number of epochs was set to 100, as in [14]. After that, larger numbers of epochs were also investigated using the same training methodology, namely 200, 300, 400, 500 and 600 epochs. The CMC curves shown in Fig. 8 are used to visualize the performance of the last obtained model on the validation set. It can be seen that as the number of epochs is increased, the performance of the last model improves. However, when 600 epochs were evaluated, it was observed that the obtained model started overfitting the training data, and poor results were obtained on the validation set. Therefore, 500 epochs were taken as the number of epochs in our assessment procedure for all remaining experiments, since the learning process still achieved good generalization without overfitting.

Fig. 8 CMC curves for the epoch number parameter evaluation using three different iris databases: a SDUMLA-HMT, b CASIA-Iris-V3, and c IITD
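The CMC curves in Figs. 8 and 9 plot the identification rate as a function of rank. For reference, such a curve can be computed from an identification score matrix as in the following sketch; the score layout is an assumption for illustration, not the authors' evaluation code.

```python
import numpy as np

def cmc_curve(scores, true_ids):
    """scores: (Q x N) matching scores for Q query images against N
    enrolled identities (higher = better); true_ids: (Q,) ground truth.
    Returns the identification rate at each rank 1..N (the CMC curve)."""
    order = np.argsort(-scores, axis=1)                   # best match first
    ranks = np.argmax(order == true_ids[:, None], axis=1) + 1
    n = scores.shape[1]
    return np.array([(ranks <= r).mean() for r in range(1, n + 1)])

# The Rank-1 identification rate quoted in the tables is cmc[0].
```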
5.2.2 Network architecture and input image size evaluation

The literature on designing powerful CNN architectures shows that this is an open problem, usually approached using prior knowledge from related applications. Generally, the CNN architecture is related to the size of the input image: a smaller network architecture (a smaller number of layers) is required for a small image size, to avoid degrading the quality of the final feature vectors by increasing the number of layers, while a deeper network architecture can be employed for larger input images, along with a large number of training samples, to increase the generalization ability of the network by learning more distinctive features from the input samples. In this study, once the training parameters had been determined, the network architecture and input image size were evaluated simultaneously by performing extensive experiments using different network configurations. Based on the proposed training methodology, our evaluation strategy starts from a relatively small network (three layers), and the performance of the network was then observed while adding more layers and more filters within each layer. The influence of input image size was investigated using image sizes of (64 × 64) pixels and (128 × 128) pixels, each with two different network depths: the (64 × 64) size was assessed using network topologies with 3 and 4 convolutional layers, while the (128 × 128) size was assessed using network topologies with 4 and 5 convolutional layers.

The results obtained by applying the proposed system to the three different iris databases with image sizes of (64 × 64) pixels and (128 × 128) pixels are presented in Tables 3 and 4, respectively.

Table 3 Rank-1 identification rates (%) obtained for different CNN architectures using the input image size of (64 × 64) pixels. Each configuration has either 3 or 4 layers and indicates the number of filters in each layer (bold values in the original indicate the highest obtained recognition rates)

| Configuration | SDUMLA-HMT R. iris | SDUMLA-HMT L. iris | CASIA-Iris-V3 R. iris | CASIA-Iris-V3 L. iris | IITD R. iris | IITD L. iris |
|---|---|---|---|---|---|---|
| C1 [6 6 6] | 46.30 | 44.71 | 7.79 | 0.85 | 0.44 | 0.44 |
| C2 [6 6 20] | 48.77 | 44.33 | 0.83 | 0.84 | 0.45 | 0.46 |
| C3 [6 20 6] | 48.96 | 40.94 | 76.60 | 69.46 | 0.47 | 0.44 |
| C4 [6 20 36] | 46.22 | 46.41 | 62.69 | 60.89 | 47.76 | 0.46 |
| C5 [6 20 36 36] | 86.50 | 92.73 | 87.68 | 96.79 | 88.04 | 86.47 |
| C6 [6 20 36 64] | 93.30 | 96.22 | 94.64 | 97.62 | 84.46 | 82.45 |
| C7 [6 20 36 96] | 97.54 | 95.94 | 96.84 | 98.21 | 94.82 | 94.15 |
| C8 [6 20 36 128] | 95.66 | 98.68 | 96.85 | 98.57 | 95.54 | 96.56 |
| C9 [6 20 36 150] | 98.88 | 97.64 | 98.04 | 98.27 | 95.94 | 96.74 |
| C10 [6 20 36 256] | 98.77 | 98.08 | 98.87 | 99.10 | 97.00 | 97.77 |
| C11 [6 32 36 64] | 94.15 | 98.67 | 98.33 | 97.02 | 99.10 | 99.12 |
| C12 [6 32 36 96] | 99.25 | 99.43 | 99.52 | 97.86 | 99.02 | 99.50 |
| C13 [6 32 36 128] | 99.15 | 99.71 | 99.29 | 99.64 | 99.33 | 99.64 |
| C14 [6 32 36 150] | 98.68 | 98.08 | 99.16 | 99.11 | 99.28 | 98.88 |
| C15 [6 32 36 256] | 99.05 | 98.96 | 99.70 | 99.64 | 99.46 | 99.50 |
| C16 [6 32 64 256] | 99.62 | 100 | 99.94 | 99.88 | 99.82 | 99.92 |

Table 4 Rank-1 identification rates (%) obtained for different CNN architectures using the input image size of (128 × 128) pixels. Each configuration has either 4 or 5 layers and indicates the number of filters in each layer (bold values in the original indicate the highest obtained recognition rates)

| Configuration | SDUMLA-HMT R. iris | SDUMLA-HMT L. iris | CASIA-Iris-V3 R. iris | CASIA-Iris-V3 L. iris | IITD R. iris | IITD L. iris |
|---|---|---|---|---|---|---|
| C1 [6 6 16 16] | 0.97 | 0.94 | 45.35 | 11.78 | 34.50 | 15.89 |
| C2 [6 16 16 16] | 56.79 | 56.45 | 59.46 | 66.13 | 40.80 | 37.86 |
| C3 [6 16 16 32] | 57.55 | 71.51 | 72.38 | 72.20 | 46.38 | 34.06 |
| C4 [6 16 32 32] | 78.77 | 80.28 | 55.54 | 57.97 | 94.41 | 94.73 |
| C5 [6 16 32 64] | 85.94 | 64.76 | 96.13 | 94.70 | 97.67 | 95.93 |
| C6 [6 16 32 96] | 92.26 | 95.18 | 96.66 | 97.14 | 98.48 | 98.30 |
| C7 [6 16 32 128] | 93.58 | 94.52 | 98.51 | 98.21 | 96.07 | 98.12 |
| C8 [6 16 32 256] | 95.75 | 95.66 | 98.15 | 98.92 | 98.48 | 97.36 |
| C9 [6 32 32 32] | 32.54 | 66.13 | 82.38 | 94.70 | 85.17 | 84.11 |
| C10 [6 32 32 64] | 92.07 | 81.41 | 92.55 | 92.73 | 89.19 | 93.83 |
| C11 [6 32 32 96] | 93.77 | 92.16 | 97.32 | 98.09 | 96.25 | 85.71 |
| C12 [6 32 32 128] | 94.52 | 92.35 | 97.02 | 98.09 | 96.25 | 96.60 |
| C13 [6 32 32 256] | 93.49 | 92.92 | 96.90 | 96.93 | 94.91 | 93.48 |
| C14 [6 32 64 256] | 94.53 | 93.02 | 99.17 | 97.56 | 97.37 | 96.25 |
| C15 [6 16 32 32 64] | 96.42 | 80.09 | 95.23 | 99.04 | 98.43 | 98.17 |
| C16 [6 16 32 32 96] | 97.45 | 93.27 | 99.28 | 99.34 | 98.34 | 98.83 |
| C17 [6 16 32 32 128] | 98.87 | 96.98 | 99.34 | 99.40 | 99.73 | 96.92 |
| C18 [6 16 32 32 256] | 98.49 | 97.83 | 99.22 | 99.64 | 97.09 | 99.28 |
| C19 [6 16 32 64 64] | 98.49 | 91.04 | 92.92 | 96.90 | 99.78 | 99.64 |
| C20 [6 16 32 64 96] | 98.58 | 98.39 | 99.64 | 99.82 | 99.11 | 98.75 |
| C21 [6 16 32 64 128] | 99.43 | 99.71 | 99.16 | 99.82 | 99.50 | 95.76 |
| C22 [6 16 32 64 256] | 99.43 | 99.62 | 99.88 | 100 | 99.41 | 98.75 |
| C23 [6 16 64 64 256] | 97.07 | 99.39 | 99.40 | 99.64 | 99.91 | 99.15 |
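As an illustration of the best-performing configuration in Table 3 (C16, with [6 32 64 256] filters on a (64 × 64) input), the following is a hedged PyTorch-style sketch. The kernel sizes beyond the stated (3 × 3) first-layer receptive field, the pooling layout, the grayscale input channel, and the dropout placement are assumptions, since the paper does not fully specify them.

```python
import torch.nn as nn

def iris_convnet(num_classes):
    """Sketch of a [6, 32, 64, 256]-filter CNN for 64x64 iris inputs.
    Only the filter counts, the 3x3 first-layer receptive field, and the
    1-pixel input padding come from the paper; the rest is assumed."""
    return nn.Sequential(
        nn.Conv2d(1, 6, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 64 -> 32
        nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        nn.Conv2d(64, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 8 -> 4
        nn.Flatten(),
        nn.Dropout(0.5),
        nn.Linear(256 * 4 * 4, num_classes),  # Softmax is applied in the loss
    )
```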
As can be seen in these tables, the number of filters in each layer tends to increase as one moves from the input layer toward the higher layers, as has been done in previous work in the literature, to avoid memory issues and to control the model capacity. In general, it was observed that the performance of a CNN improves as the number of layers is increased along with the number of filters per layer. For instance, in Table 3 the recognition rate increased dramatically for all databases when a new layer was added on top of the network. However, adding a new layer on top of the network and/or altering the number of filters within each layer should be carefully controlled. For instance, in Table 4 it can be seen that adding a new layer led to a decrease in the recognition rate from 93.02 to 80.09% for the left iris images in the SDUMLA-HMT database, and from 99.17 to 95.23% for the right iris images in the CASIA-Iris-V3 database. In addition, changing the number of filters within each layer has a significant influence on the performance of the CNN. Examples of this are shown in Table 3 (e.g., configurations 10 and 11) and Table 4 (e.g., configurations 18 and 19), where altering the number of filters in some layers led to either an increase or a decrease in the recognition rate.

As indicated in Fig. 7, the last CNN configuration in Table 3 was preferred as the adopted CNN architecture for identifying a person's identity, for several reasons. Firstly, it provides the highest Rank-1 identification rate for both the left and right iris images on all the employed databases, with less complexity (fewer parameters). Secondly, although this model also gave promising results using an input image of size (128 × 128) pixels, the input image size might be a major constraint in some applications; hence, the smaller size is used as the input image size for IrisConvNet. In addition, the training time required to train such a configuration is less than one day, as shown in Table 5. Finally, a larger CNN configuration along with a larger image size drives significant increases in memory requirements and computational complexity. The performance of IrisConvNet for iris identification, for both employed input image sizes, is expressed through the CMC curves shown in Fig. 9.

Table 5 The average training time of the proposed system

| Database | (64 × 64) | (128 × 128) |
|---|---|---|
| SDUMLA-HMT | 6 h 30 min | 20 h 33 min |
| CASIA-Iris-V3 | 9 h 18 min | 53 h 14 min |
| IITD | 17 h 33 min | 60 h 46 min |

Fig. 9 CMC curves for IrisConvNet for iris identification: a SDUMLA-HMT, b CASIA-Iris-V3, and c IITD

In this work, the running time was measured by implementing the proposed approaches in a laboratory at Bradford University consisting of 25 PCs with the Windows 8.1 operating system, Intel Xeon E5-1620 CPUs, and 16 GB of RAM.
The system code was written to run in MATLAB R2015a and later versions. Table 5 shows the overall average training time of the proposed system, which depends mainly on the input image size, the number of subjects in each database, and the CNN architecture.

5.3 Fusion methods evaluation

When used as an iris identification system, each time a query sample is presented, its similarity score is computed by comparing it against the templates of the N different subjects registered in the database, and a vector of N matching scores is produced by the classifier. These matching scores are arranged in descending order to form the ranking list of matching identities, where a smaller rank number indicates a better match. Table 6 shows the Rank-1 identification rate (%) for both left and right iris images in the SDUMLA-HMT, CASIA-Iris-V3, and IITD databases, and their fusion rates using the three ranking-level fusion methods: highest rank, Borda count, and logistic regression. All three fusion methods produced the same level of accuracy, as shown in Table 6. The highest rank method was adopted for comparing the performance of the proposed system with that of other existing systems, due to its efficiency compared to the Borda count method in exploiting the strength of each classifier effectively and in breaking ties between subjects in the final ranking list. In addition, it is simpler than the logistic regression method, which needs a training phase to find the weight for each individual classifier.

Table 6 Rank-1 identification rate (%) of the proposed system on the iris databases

| Database | Right iris | Left iris | Highest rank | Borda count | Logistic regression |
|---|---|---|---|---|---|
| SDUMLA-HMT | 99.62 | 100 | 100 | 100 | 100 |
| CASIA-Iris-V3 | 99.94 | 99.88 | 100 | 100 | 100 |
| IITD | 99.82 | 99.92 | 100 | 100 | 100 |

The comparison of the performance of the proposed system with other existing methods on the CASIA-Iris-V3 and IITD databases is given in Table 7. The feature extraction and classification techniques used in these methods, along with their evaluation protocols, are shown in Table 8. We have assumed that the existing methods shown in Table 7 are customized for these two iris databases, and the best results they obtained are quoted herein. As can be seen from Table 7, the proposed deep learning approach has, overall, outperformed all the state-of-the-art feature extraction methods, which include the Discrete Wavelet Transform (DWT), Discrete Cosine Transform (DCT), Principal Component Analysis (PCA), Average Local Binary Pattern (ALBP), etc.

Table 7 Comparison of the proposed system with other existing approaches using two different iris databases

| Database | Approach | CRR (%) | Time (s) |
|---|---|---|---|
| CASIA-Iris-V3 | Ma et al. [63] | 99.85 | – |
| CASIA-Iris-V3 | Vatsa et al. [64] | 97.21 | 1.82 |
| CASIA-Iris-V3 | Kerim and Mohammed [65] | 99.40 | 2 |
| CASIA-Iris-V3 | Umer et al. [57] | 100 | 0.98 |
| CASIA-Iris-V3 | De Costa and Gonzaga [66] | 99.10 | – |
| CASIA-Iris-V3 | Ng et al. [67] | 98.45 | – |
| CASIA-Iris-V3 | Zhang and Guan [68] | 99.60 | – |
| CASIA-Iris-V3 | Roy et al. [69] | 97.21 | 0.995 |
| CASIA-Iris-V3 | Li et al. [70] | 99.91 | – |
| CASIA-Iris-V3 | Bharath et al. [71] | 84.17 | 0.44 |
| CASIA-Iris-V3 | IrisConvNet system | 100 | 0.89 |
| IITD | Umer et al. [57] | 99.52 | 1.11 |
| IITD | Bharath et al. [71] | 95.93 | 0.10 |
| IITD | Nalla and Chalavadi [72] | 86.00 | – |
| IITD | Elgamal and Al-Biqami [73] | 99.5 | – |
| IITD | Minaee et al. [74] | 99.20 | – |
| IITD | Dhage et al. [75] | 97.81 | 93.24 |
| IITD | Abhiram et al. [76] | 97.12 | – |
| IITD | IrisConvNet system | 100 | 0.81 |
In terms of the Rank-1 identification rate, the highest results on these two databases were obtained by the proposed system. Although Umer et al. [57] also achieved a 100% recognition rate on the CASIA-Iris-V3 database, the proposed system achieved a better running time while establishing the person's identity from 120 persons of that database, instead of 99 persons as in [57]. In addition, they obtained inferior results on the IITD database in terms of both recognition rate and running time.

Table 8 Summary of the compared iris recognition approaches and their evaluation protocols

| Approach | Feature extraction | Classification | Evaluation protocol |
|---|---|---|---|
| Abhiram et al. [76] | Circular sector DCT | Euclidean distance | 3:2 (training:testing) |
| Bharath et al. [71] | Radon transform + gradient-based isolation | Euclidean distance | 4:1 (training:testing) |
| De Costa and Gonzaga [66] | Dynamic features | Euclidean distance | Cross-validation |
| Dhage et al. [75] | DWT + DCT | Euclidean distance | 9:1 (training:testing) |
| Elgamal and Al-Biqami [73] | DWT + PCA | KNN | – |
| Kerim and Mohammed [65] | Co-occurrence matrix | Euclidean distance | – |
| Li et al. [70] | ALBP | KNN + SVM | 4:1 (training:testing) |
| Ma et al. [63] | Circular symmetric filter | Nearest feature line | 3:2 (training:testing) |
| Minaee et al. [74] | Scattering transform | Minimum distance | Cross-validation |
| Ng et al. [67] | Haar wavelet transform | Hamming distance | – |
| Nalla and Chalavadi [72] | Log-Gabor wavelet | Online Dictionary Learning | Cross-validation |
| Roy et al. [69] | Multi-perturbation Shapley analysis | SVM | Cross-validation |
| Umer et al. [57] | TCM with ordered PB | SVM + fusion | Leave-one-out |
| Vatsa et al. [64] | Gabor transform and Euler numbers | Mahalanobis distance | Cross-validation |
| Zhang and Guan [68] | Empirical mode decomposition | KNN | – |
| IrisConvNet system | Convolutional Neural Network | Softmax classifier + fusion | Cross-validation |

6 Conclusions and future work

In this paper, a robust and fast multimodal biometric system is proposed to identify a person's identity by constructing a deep-learning-based system for both the right and left irises of the same person. The proposed system starts by applying an automatic, real-time iris localization model to detect the iris region using the CCHT, which significantly increases the overall accuracy and reduces the processing time of the subsequent stages of the proposed system; this matters because the appearance of the eyelids and eyelashes can significantly decrease iris recognition performance if their effects are not reduced. In this work, an efficient deep learning system based on a combination of a CNN and a Softmax classifier is proposed, to extract discriminative features from the iris image without any domain knowledge and then classify it into one of N classes. After the identification scores and rankings are obtained from both the left and right iris images of each person, a multi-biometric system is established by integrating these rankings into a new ranking list, using one of the ranking-level fusion techniques, to formulate the final decision. The performance of the identification system is then expressed through the CMC curve. In this work, we proposed a powerful training methodology equipped with a number of training strategies in order to control overfitting during the learning process and increase the generalization ability of the neural network.
The effectiveness and robustness of the proposed approaches have been tested on three challenging databases: the SDUMLA-HMT, CASIA-Iris-V3 Interval and IITD iris databases. Extensive experiments have been conducted on these databases to evaluate different training parameters (e.g., learning rate, number of layers, number of filters per layer) in order to build the best CNN as the framework for the proposed iris identification system. The experimental results demonstrated the superiority of the proposed system over recently reported iris recognition systems, with a Rank-1 identification rate of 100% on all three databases and less than one second required to establish a person's identity. Clearly, further research will be required to validate the efficiency of the proposed approaches using larger databases with more difficult and challenging iris images. In addition, it is worth exploring the potential of applying the proposed deep learning approaches on top of iris images pre-processed using well-known feature extraction methods, such as LBP and the Curvelet transform; this might guide the proposed deep learning approaches to explore more discriminating features that are otherwise not obtainable from the raw data.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Hajari K (2015) Improving iris recognition performance using local binary pattern and combined RBFNN. Int J Eng Adv Technol 4(4):108–112
2. Al-Waisy AS, Qahwaji R, Ipson S, Al-Fahdawi S (2015) A robust face recognition system based on curvelet and fractal dimension transforms. In: 2015 IEEE international conference on computer and information technology; ubiquitous computing and communications; dependable, autonomic and secure computing; pervasive intelligence and computing. pp 548–555
3. Tan T, Sun Z (2009) Ordinal measures for iris recognition. IEEE Trans Pattern Anal Mach Intell 31(12):2211–2226
4. Abiyev RH, Kilic KI (2011) Robust feature extraction and iris recognition for biometric personal identification. In: Biometric systems design and applications. InTech
5. Hentati R, Hentati M, Abid M (2012) Development a new algorithm for iris biometric recognition. Int J Comput Commun Eng 1(3):283–286
6. Das A, Parekh R (2012) Iris recognition using a scalar based template in eigen-space. Int J Comput Sci Telecommun 3(5):3–8
7. AlMahafzah H, Zaid AlRwashdeh M (2012) A survey of multibiometric systems. Int J Comput Appl 43(15):36–43
8. Gad R, El-Sayed A, Zorkany M, El-Fishawy N (2015) Multi-biometric systems: a state of the art survey and research directions. Int J Adv Comput Sci Appl 6(6):128–138
9. Ross A, Nandakumar K, Jain AK (2006) Handbook of multibiometrics. Springer
10. Fernandez FA (2008) Biometric sample quality and its application to multimodal authentication systems. PhD Thesis, Universidad Politécnica de Madrid (UPM)
11. Deng L, Yu D (2013) Deep learning: methods and applications. Found Trends Signal Process 7(3–4):198–387
12. Pellegrini T (2015) Comparing SVM, Softmax, and shallow neural networks for eating condition classification. In: Sixteenth annual conference of the international speech communication association. pp 899–903
13. Srivastava N, Hinton GE, Krizhevsky A, Sutskever I, Salakhutdinov R (2014) Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 15:1929–1958
14. Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Advances in neural information processing systems. pp 1–9
15. Menotti D, Chiachia G, Pinto A, Schwartz WR, Pedrini H, Falcão AX, Rocha A (2015) Deep representations for iris, face, and fingerprint spoofing detection. IEEE Trans Inf Forensics Secur 10(4):864–879
16. Silva P, Luz E, Baeta R, Pedrini H, Falcao AX, Menotti D (2015) An approach to iris contact lens detection based on deep image representations. In: 2015 28th SIBGRAPI conference on graphics, patterns and images. pp 157–164
17. Daugman JG (1993) High confidence visual recognition of persons by a test of statistical independence. IEEE Trans Pattern Anal Mach Intell 15(11):1148–1161
18. Ren X, Peng Z, Zeng Q, Peng C, Zhang J, Wu S, Zeng Y (2008) An improved method for Daugman's iris localization algorithm. Comput Biol Med 38(1):111–115
19. Proença H, Alexandre LA (2006) Iris segmentation methodology for non-cooperative recognition. IEE Proc Vis Image Signal Process 153(2):199–205
20. Sahmoud SA, Abuhaiba IS (2013) Efficient iris segmentation method in unconstrained environments. Pattern Recognit 46(12):3174–3185
21. Wildes RP (1997) Iris recognition: an emerging biometric technology. Proc IEEE 85(9):1348–1363
22. Boles WW, Boashash B (1998) A human identification technique using images of the iris and wavelet transform. IEEE Trans Signal Process 46(4):1185–1188
23. Lim S, Lee K, Byeon O, Kim T (2001) Efficient iris recognition through improvement of feature vector and classifier. ETRI J 23(2):61–70
24. Masek L (2003) Recognition of human iris patterns for biometric identification. The School of Computer Science and Software Engineering, The University of Western Australia, Crawley
25. Ding S, Zhu H, Jia W, Su C (2011) A survey on feature extraction for pattern recognition. Artif Intell Rev 37(3):169–180
26. Jihua Y, Dan H, Guomiao X, Yahui C (2013) An advanced BPNN face recognition based on curvelet transform and 2DPCA. In: 8th international conference on computer science and education (ICCSE). pp 1019–1022
27. Khalajzadeh H, Mansouri M, Teshnehlab M (2014) Face recognition using convolutional neural network and simple logistic classifier. In: Soft computing in industrial applications. Springer, pp 197–207
28. Zeng M, Nguyen LT, Yu B, Mengshoel OJ, Zhu J, Wu P, Zhang J (2014) Convolutional neural networks for human activity recognition using mobile sensors. In: 2014 6th international conference on mobile computing, applications and services (MobiCASE). pp 197–205
29. Syafeeza AR, Liew SS, Bakhteri R (2014) Convolutional neural network for face recognition with pose and illumination variation. Int J Eng Technol 6(1):44–57
30. Collobert R, Weston J (2008) A unified architecture for natural language processing: deep neural networks with multitask learning. In: Proceedings of the 25th international conference on machine learning. pp 160–167
31. Hafemann LG, Oliveira LS, Cavalin PR, Sabourin R (2015) Transfer learning between texture classification tasks using convolutional neural networks. In: International joint conference on neural networks. pp 1–7
32. El Khiyari H, Wechsler H (2016) Face recognition across time lapse using convolutional neural networks. J Inf Secur 7(3):141–151
33. Dahl GE (2015) Deep learning approaches to problems in speech recognition, computational chemistry, and natural language text processing. PhD Thesis, Department of Computer Science, University of Toronto
34. Salakhutdinov R, Hinton G (2009) Semantic hashing. Int J Approx Reason 50(7):969–978
35. Ciresan DC, Meier U, Masci J, Schmidhuber J (2012) Multi-column deep neural network for traffic sign classification. Neural Netw 32:333–338
36. Abibullaev B, An J, Jin SH, Lee SH, Il Moon J (2013) Deep machine learning: a new frontier in artificial intelligence research. Med Eng Phys 35(12):1811–1818
37. Bengio Y (2009) Learning deep architectures for AI. Found Trends Mach Learn 2(1):1–127
38. Zeng R, Wu J, Shao Z, Senhadji L, Shu H (2014) Quaternion softmax classifier. Electron Lett 50(25):1929–1930
39. Al-Waisy AS, Qahwaji R, Ipson S, Al-Fahdawi S (2015) A fast and accurate iris localization technique for healthcare security systems. In: 2015 IEEE international conference on computer and information technology; ubiquitous computing and communications; dependable, autonomic and secure computing; pervasive intelligence and computing. pp 1028–1034
40. Duda RO, Hart PE, Stork DG (2012) Pattern classification. Wiley, New York
41. Sun Y, Wang X, Tang X (2014) Deep learning face representation from predicting 10,000 classes. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 1891–1898
42. Duchi J, Hazan E, Singer Y (2011) Adaptive subgradient methods for online learning and stochastic optimization. J Mach Learn Res 12:2121–2159
43. Zeiler MD, Fergus R (2014) Visualizing and understanding convolutional networks. In: European conference on computer vision. Springer, pp 818–833
44. Oquab M, Bottou L, Laptev I, Sivic J (2014) Learning and transferring mid-level image representations using convolutional neural networks. In: IEEE conference on computer vision and pattern recognition
45. Al-Waisy AS, Qahwaji R, Ipson S, Al-Fahdawi S (2017) A multimodal deep learning framework using local feature representations for face recognition. Mach Vis Appl 1–20. doi:10.1007/s00138-017-0870-2
46. Turchenko V, Luczak A (2015) Creation of a deep convolutional auto-encoder in Caffe. arXiv preprint arXiv:1512.01596
47. Abaza A, Ross A (2009) Quality based rank-level fusion in multibiometric systems. In: Proceedings of the 3rd IEEE international conference on biometrics: theory, applications, and systems. pp 1–6
48. Monwar MM, Gavrilova M (2013) Markov chain model for multimodal biometric rank fusion. Signal Image Video Process 7(1):137–149
49. Yin Y, Liu L, Sun X (2011) SDUMLA-HMT: a multimodal biometric database. In: Chinese conference on biometric recognition. Springer, Berlin, Heidelberg, pp 260–268
50. Chinese Academy of Sciences, Institute of Automation. CASIA iris image database version 3.0 (CASIA-IrisV3). http://biometrics.idealtest.org/dbDetailForUser.do?id=3
51. Kumar A, Passi A (2010) Comparison and combination of iris matchers for reliable personal authentication. Pattern Recognit 43(3):1016–1026
52. Jan F, Usman I, Agha S (2012) Iris localization in frontal eye images for less constrained iris recognition systems. Digit Signal Process 22(6):971–986
53. Wang K, Qian Y (2011) Fast and accurate iris segmentation based on linear basis function and RANSAC. In: Proceedings of the international conference on image processing (ICIP), vol 2. pp 3205–3208
54. Mahlouji M, Noruzi A (2012) Human iris segmentation for iris recognition in unconstrained environments. Int J Comput Sci Issues 9(1):149–155
55. Uhl A, Wild P (2012) Weighted adaptive Hough and ellipsopolar transforms for real-time iris segmentation. In: Proceedings of the 2012 5th IAPR international conference on biometrics (ICB). pp 283–290
56. Nkole IU, Bin Sulong G (2012) An enhanced iris segmentation algorithm using circle Hough transform. In: Proceedings of the IEEE international conference on digital signal and image processing. pp 1–7
57. Umer S, Dhara BC, Chanda B (2016) Texture code matrix-based multi-instance iris recognition. Pattern Anal Appl 19(1):283–295
58. Wild P et al (2015) Segmentation-level fusion for iris recognition. In: 14th international conference of the biometrics special interest group (BIOSIG 2015). pp 1–6
59. Aydi W et al (2011) Improved Masek approach for iris localization. In: ICM 2011 proceedings. IEEE, pp 1–5
60. Pawar MK et al (2012) Iris segmentation using geodesic active contour for improved texture extraction in recognition. Int J Comput Appl 47(16):40–47
61. Mehrotra H, Sa PK, Majhi B (2013) Fast segmentation and adaptive SURF descriptor for iris recognition. Math Comput Model 58(1–2):132–146
62. Chowdhury AR, Lin T-Y, Maji S, Learned-Miller E (2016) One-to-many face recognition with bilinear CNNs. In: IEEE winter conference on applications of computer vision. pp 1–9
63. Ma L, Wang Y, Tan T (2002) Iris recognition using circular symmetric filters. In: 16th international conference on pattern recognition. pp 414–417
64. Vatsa M, Singh R, Noore A (2008) Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing. IEEE Trans Syst Man Cybern Part B Cybern 38(4):1021–1035
65. Kerim AA, Mohammed SJ (2014) New iris feature extraction and pattern matching based on statistical measurement. Int J Emerg Trends Technol Comput Sci 3(5):226–231
66. Da Costa RM, Gonzaga A (2012) Dynamic features for iris recognition. IEEE Trans Syst Man Cybern B Cybern 42(4):1072–1082
67. Ng TW, Tay TL, Khor SW (2010) Iris recognition using rapid Haar wavelet decomposition. In: 2010 2nd international conference on signal processing systems (ICSPS), vol 1. pp V1-820–V1-823
68. Zhang H, Guan X (2012) Iris recognition based on grouping KNN and rectangle conversion. In: Proceedings of the 2012 IEEE 3rd international conference on software engineering and service science (ICSESS). pp 131–134
69. Roy K, Bhattacharya P, Suen CY (2011) Iris recognition using shape-guided approach and game theory. Pattern Anal Appl 14(4):329–348
70. Li C, Zhou W, Yuan S (2015) Iris recognition based on a novel variation of local binary pattern. Vis Comput 31(10):1419–1429
71. Bharath BV, Vilas AS, Manikantan K, Ramachandran S (2014) Iris recognition using radon transform thresholding based feature extraction with gradient-based isolation as a pre-processing technique. In: 9th international conference on industrial and information systems (ICIIS). IEEE, pp 1–8
72. Nalla PR, Chalavadi KM (2015) Iris classification based on sparse representations using on-line dictionary learning for large-scale de-duplication applications. SpringerPlus 4(1):1–10
73. Elgamal M, Al-Biqami N (2013) An efficient feature extraction method for iris recognition based on wavelet transformation. Int J Comput Inf Technol 2(3):521–527
74. Minaee S, Abdolrashidi A, Wang Y (2015) Iris recognition using scattering transform and textural features. In: Signal processing and signal processing education workshop (SP/SPE). IEEE, pp 37–42
75. Dhage SS, Hegde SS, Manikantan K, Ramachandran S (2015) DWT-based feature extraction and radon transform based contrast enhancement for improved iris recognition. Procedia Comput Sci 45:256–265
76. Abhiram MH, Sadhu C, Manikantan K, Ramachandran S (2012) Novel DCT based feature extraction for enhanced iris recognition. In: Proceedings of the 2012 international conference on communication, information and computing technology (ICCICT). pp 1–6
", "fullTextIdentifier": "https://bradscholars.brad.ac.uk/bitstream/handle/10454/15682/Qahwaji_et_al_PAA.pdf?sequence=2&isAllowed=y", "identifiers": [], "issn": "1433-7541", "journals": [], "language": null, "magId": "2766373199", "oai": "oai:bradscholars.brad.ac.uk:10454/15682", "pdfHashValue": "01698a5e8b5394e31d93f2b1202841c6c79151a7", "publisher": "'Springer Science and Business Media LLC'", "relations": [ "https://doi.org/10.1007/s10044-017-0656-1" ], "subjects": [ "Article", "Published version" ], "title": "A multi-biometric iris recognition system based on a deep learning approach", "topics": [ "Iris recognition; Multimodal biometric systems; Deep learning; Convolutional Neural Network; Softmax classifier; AdaGrad method" ], "urls": [ "http://hdl.handle.net/10454/15682" ], "year": 2017 }
[ 49, 48, 46, 116, 97, 114, 46, 120, 122 ]
000000003
hf://datasets/laion/open-access-papers@1409f48923cf9afefb2c7268cc33561062112592/data/shard-000.tar.gz
{ "abstract": "NoIron Age studies in northern Britain have been dominated by one monument form, the broch. This focus on these monumental towers of the Atlantic Scotland, perhaps at the expense of other archaeological evidence, has brought about a strong division in the archaeological community. MacKie and Armit have both recently summarized the development of broch studies detailing the opposing arguments for the date of construction. In recent years archaeological evidence for these monuments has indicated an indigenous development rather than being associated with the movement of Iron Age peoples. This paper presents new chronological data for the construction of a Shetland broch and examines the archaeological repercussions for the 'early' chronology provided by these dates. Excavations at Old Scatness in the South Mainland of Shetland have revealed new evidence for a broch and defended Iron Age Village", "authors": [ "Dockrill, Stephen J.", "Batt, Catherine M.", "Outram, Zoe" ], "contributors": [], "coreId": "135533", "datePublished": "2006-01-01T00:00:00", "doi": null, "downloadUrl": null, "enrichments": { "citationCount": null, "documentType": { "confidence": null, "type": null }, "references": [] }, "fullText": null, "fullTextIdentifier": null, "identifiers": [], "issn": null, "journals": [], "language": null, "magId": null, "oai": "oai:bradscholars.brad.ac.uk:10454/2774", "pdfHashValue": null, "publisher": null, "relations": [ "http://cat.inist.fr/?aModele=afficheN&cpsidt=20239705" ], "subjects": [ "Article", "No full-text available in the repository" ], "title": "Time and Place: A new chronology for the origin of the broch based on the scientific dating programme at the Old Scatness Broch, Shetland.", "topics": [ "Geology and climatology", "Settlement", "Village", "Excavation", "People", "Motion", "Construction", "Tower", "Study", "Dating", "Monument", "Chronology", "Scotland", "Iron Age" ], "urls": [ "http://hdl.handle.net/10454/2774" ], "year": 2006 }
[ 49, 48, 46, 116, 97, 114, 46, 120, 122 ]
000000004
hf://datasets/laion/open-access-papers@1409f48923cf9afefb2c7268cc33561062112592/data/shard-000.tar.gz
{ "abstract": "YesFifth generation (5G) cellular networks will be comprised of millions of connected devices like wearable devices, Androids, iPhones, tablets and the Internet of Things (IoT) with a plethora of\r\napplications generating requests to the network. The 5G cellular networks need to cope with such\r\nsky-rocketing tra c requests from these devices to avoid network congestion. As such, cloud radio\r\naccess networks (C-RAN) has been considered as a paradigm shift for 5G in which requests from\r\nmobile devices are processed in the cloud with shared baseband processing. Despite call admission\r\ncontrol (CAC) being one of radio resource management techniques to avoid the network\r\ncongestion, it has recently been overlooked by the community. The CAC technique in 5G C-RAN has\r\na direct impact on the quality of service (QoS) for individual connections and overall system\r\ne ciency. In this paper, a novel Fuzzy-Logic based CAC scheme with pre-emption in C-RAN is proposed. In this scheme, cloud bursting technique is proposed to be used during congestion, where\r\nsome delay tolerant low-priority connections are pre-empted and outsourced to a public cloud with\r\na penalty charge. Simulation results show that the proposed scheme has low blocking probability\r\nbelow 5%, high throughput, low energy consumption and up to 95% of return on revenue", "authors": [ "Sigwele, Tshiamo", "Pillai, Prashant", "Alam, Atm S.", "Hu, Yim Fun" ], "contributors": [], "coreId": "153515315", "datePublished": "2017-08-31T00:00:00", "doi": "10.1186/s13638-017-0944-x", "downloadUrl": "https://core.ac.uk/download/153515315.pdf", "enrichments": { "citationCount": null, "documentType": { "confidence": 1, "type": "research" }, "references": [] }, "fullText": "Sigwele et al.\nRESEARCH\nFuzzy-Logic Based Call Admission Control in 5G\nCloud Radio Access Networks with Pre-emption\nTshiamo Sigwele1*†, Prashant Pillai2, Atm S Alam3 and Yim F Hu1\nAbstract\nFifth generation (5G) cellular networks will be\ncomprised of millions of connected devices like\nwearable devices, Androids, iPhones, tablets and\nthe Internet of Things (IoT) with a plethora of\napplications generating requests to the network.\nThe 5G cellular networks need to cope with such\nsky-rocketing traffic requests from these devices to\navoid network congestion. As such, cloud radio\naccess networks (C-RAN) has been considered as a\nparadigm shift for 5G in which requests from\nmobile devices are processed in the cloud with\nshared baseband processing. Despite call admission\ncontrol (CAC) being one of radio resource\nmanagement techniques to avoid the network\ncongestion, it has recently been overlooked by the\ncommunity. The CAC technique in 5G C-RAN has\na direct impact on the quality of service (QoS) for\nindividual connections and overall system\nefficiency. In this paper, a novel Fuzzy-Logic based\nCAC scheme with pre-emption in C-RAN is\nproposed. In this scheme, cloud bursting technique\nis proposed to be used during congestion, where\nsome delay tolerant low-priority connections are\npre-empted and outsourced to a public cloud with\na penalty charge. 
Simulation results show that the proposed scheme has a low blocking probability below 5%, high throughput, low energy consumption and up to 95% return on revenue.

Keywords: Call Admission Control (CAC); Cloud Radio Access Network (C-RAN); Pre-emption; Fuzzy-Logic; 5G

Correspondence: t.sigwele@bradford.ac.uk, Faculty of Engineering and Informatics, University of Bradford, BD7 1DP Bradford, UK

1 Introduction

The large number of mobile devices and multimedia services in recent years has resulted in gigantic demands for larger system capacities and higher data rates over large coverage areas in high-mobility environments. As a result, radio access networks (RAN) have grown tremendously complex and are becoming difficult to manage and control. Maintaining quality of service (QoS) for real-time (RT) and non-real-time (NRT) services while optimizing resource utilization is a major challenge for next-generation systems like fifth generation (5G). The 5G cellular networks will be comprised of millions of devices, like wearable devices, Androids, iPhones, tablets and Internet of Things (IoT) devices, connected to the network with a plethora of applications. The 5G cellular networks will need to cope with the explosive increase of traffic requests from these devices to avoid network overload and traffic congestion in the core network. 5G will comprise a cloud-based architecture called cloud RAN (C-RAN), which was introduced as a way of solving the drawbacks of conventional RAN by pooling BS resources into a centralized cloud. The virtualization concept is used on general purpose processors (GPPs) to dynamically allocate BS processing resources to different virtual baseband units (vBBUs) in the BBU pool.

Call admission control (CAC) is a scheme that offers an effective way of avoiding network congestion and can play a key role in the provision of guaranteed QoS in 5G. The basic function of a CAC algorithm is to accurately decide whether a connection can be accepted into a resource-constrained network without violating the service commitments made to the already admitted connections. However, traditional CAC schemes are not suitable for 5G C-RAN, whereas an efficient CAC scheme aims to optimize the call blocking probability (CBP), call dropping probability (CDP) and system utilization. There are many reasons why conventional CAC schemes are not suitable for 5G. First, conventional CAC approaches in cellular networks suffer uncertainties due to the real-time processing of radio signals and the time-varying nature of parameters such as speed, location, direction, channel conditions, available power, etc. Many of these traditional CAC schemes are ineffective, leading to incorrect admission when the network is actually incapable of servicing the request, or incorrect rejection when there are actually enough resources to service the request. Some of these CAC schemes tend to assume that network state information is static [1]; however, in practice the network is dynamic, and the measured values keep changing. Second, as stated in our previous work [1], traditional CAC schemes are based on stand-alone RAN base station (BS) architectures, while 5G will be based on centralized cloud BSs.
These BSs are preconfigured for peak loads and have unshared processing and computation resources located in the BS cell areas. These BS resources cannot be shared to address varied traffic needs in other cell areas, causing poor resource utilization, high CBP and high CDP. As such, there is a need for efficient CAC schemes suitable for 5G. Intelligent CAC schemes based on intelligent decision-making techniques like fuzzy logic are a promising solution and address the problem of imprecision and uncertainty in cellular networks [2]. These schemes mimic the cognitive behaviour of the human mind without the need for complex mathematical modelling, making them adaptive, less complex, flexible and suitable to cope with the rapidly changing network conditions of 5G cellular networks.

This paper presents a fuzzy-logic based CAC scheme using pre-emption in 5G C-RAN. During congestion, some delay-tolerant, low-priority NRT connections are pre-empted and outsourced to a public cloud with a pricing penalty to accommodate the RT connections, a technique called cloud bursting [3]. This work is the continuation of our published works in [1] and [2]: the former proposed CAC in 5G C-RAN without fuzzy logic, while the latter proposed CAC in 5G C-RAN using fuzzy logic without pre-emption. The work in this paper adds pre-emption and the cloud bursting technique. The contributions of this paper are as follows:

i) A CAC scheme based on fuzzy logic with pre-emption in 5G C-RAN is proposed. The fuzzy logic avoids the uncertainties caused by traditional CAC schemes in distributed RAN systems.

ii) A cloud bursting technique is proposed where, during congestion, low-priority delay-tolerant NRT connections are pre-empted and outsourced to a public cloud at a certain price penalty to accommodate the RT connections. It is assumed that the public cloud has infinite processing capacity and therefore cannot get congested; as such, it is not captured in the simulation.

iii) A rigorous simulation study is conducted to validate the proposed scheme, which shows a significant performance improvement.

CAC with pre-emption has been studied in the past, but in this work pre-emption in CAC is implemented using a fuzzy-logic technique, which significantly improves the blocking probability. Moreover, CAC on its own has not previously been studied in C-RAN, and this is the first work to do so; likewise, cloud bursting has not been used in CAC before and is introduced in this work.

The rest of this paper is organized as follows. Section II presents related work on CAC schemes. The proposed fuzzy-logic CAC scheme in 5G C-RAN is presented in Section III. Section IV presents the simulation model and the obtained performance results. Finally, conclusions and further work are presented in Section V.

2 Related Work

There are many ways of categorizing CAC schemes, such as parameter based, measurement based, utility based, centralized/distributed, static/dynamic, etc. Comprehensive surveys can be found in [4, 5, 6, 7]. This paper concentrates on intelligent CAC schemes, which are based on intelligent decision-making techniques for solving the problem of error and uncertainty in conventional CAC schemes [8]. They are adaptive and flexible, thus making them suitable to cope with the rapidly changing network conditions and bursty traffic that can occur in 5G networks, giving an efficient network management scheme.
A fuzzy-logic CAC scheme for stand-alone BSs in high-speed networks was proposed in [9]. Even though the author used fuzzy logic to better estimate the equivalent capacity, he does not show how the scheme performs in terms of CBP. In [10], the authors proposed a fuzzy-logic CAC scheme for Long Term Evolution (LTE). Even though the proposed scheme shows better call rejection than the quality-index based approach, the CAC scheme is based on a stand-alone BS architecture with low BS utilization, which is not suitable for 5G. A method of fuzzy admission control for multimedia applications is proposed in [11]. In this method, two fuzzy controllers are introduced for multimedia applications, allowing better estimation of QoS. The drawback of this scheme is that it has many fuzzy controllers, which can magnify the CAC complexity and computation latency. In [12], a CAC scheme using a genetic algorithm (GA) is proposed for roaming mobile users with low handoff latency in next-generation wireless systems. The scheme provides high network utilization and minimum cost, but it is not suitable for real-time applications, since a GA is very slow and cannot be used for real-time decision making. A neural network approach for CAC with QoS guarantees in multimedia high-speed networks is proposed in [13]. It is an integrated method that combines linguistic control capabilities and the learning abilities of a neural network. Even though the scheme provides higher system utilization, it requires large computational resources working in parallel. A novel learning approach to solve CAC in multimedia cellular networks with multiple classes of traffic is presented in [14]. The near-optimal CAC policy is obtained through a form of neuro-evolution algorithm. This method guarantees that the specified CDP remains under a predefined upper bound while retaining an acceptable CBP. This scheme is a black-box learning approach, since knowledge of its internal workings is never exposed.

3 Proposed CAC Scheme

3.1 C-RAN Architecture

C-RAN is a paradigm shift for next-generation RANs like 5G. C-RAN is described using four C's, which stand for clean, centralized processing, collaborative radio and real-time cloud computing [1]. The C-RAN architecture adopted in this paper is shown in Fig. 1. The C-RAN concept separates the radio and antenna parts from the digital baseband parts and pools multiple baseband units (BBUs) in a central office called the BBU pool. These digital-only BSs, called vBBUs, are linked via high-bandwidth, low-latency fiber to remote radio heads (RRHs). GPPs like x86 and ARM processors are used to house the BBUs and, using the cloud computing virtualization concept, multiple vBBU virtual machines (VMs) are dynamically provisioned in accordance with traffic demands.

3.2 Problem Formulation

The main problem is that next-generation cellular networks like 5G C-RAN will have to process many requests from billions of devices, leading to traffic congestion in the network. The question to be answered is: how can efficient CAC schemes be devised and incorporated in 5G C-RAN to improve CBP and resource utilization while maintaining the required QoS?

3.3 Fuzzy-Logic based CAC Scheme

In this paper, a fuzzy-logic scheme is used for performing CAC in 5G C-RAN because of its simplicity and robustness [6].
Fuzzy-logic techniques resemble human decision making, with an ability to generate precise solutions from certain or approximate information. Fuzzy logic avoids the uncertainties and computational complexities introduced by many CAC schemes, does not require precise inputs, and can process any number of inputs. Fuzzy logic incorporates a simple, rule-based approach based on natural language to solve a control problem, rather than attempting to model the system mathematically. In the proposed scheme, baseband signals from multiple cells are no longer processed on their stand-alone BBUs but are processed on GPPs in the cloud using the concept of cloud computing. The GPPs are software defined, enabling multiple radio signals from different cells to be processed on one computing platform. This is made possible through virtualization technology, where hardware components are abstracted from software components. The vBBUs are dynamically provisioned to service traffic requests from cells; each vBBU performs the baseband signal processing of a specific cell's traffic. The traffic demand from cells is mapped onto baseband processing resources such that every RRH's traffic is serviced by its own vBBU.

Fig. 2 shows the proposed fuzzy CAC system model for 5G C-RAN, which is located in the BBU pool inside the cloud controller. The model consists of various modules, comprising the operator's C-RAN infrastructure for normal processing of requests when congestion is low, and a third-party public C-RAN infrastructure for handling requests for the operator's C-RAN during congestion. Connection requests that are processed in the public infrastructure are charged a certain price by the charging manager, depending on the type of service and the size of the connection request. The resource estimator estimates the available capacity in the operator's C-RAN infrastructure and indicates whether the cloud is congested or not. The model also comprises the fuzzy controller, which makes the CAC decisions for incoming requests from users. The fuzzy controller takes as inputs three variables, namely the effective capacity Ec in Kbps, the service type St and the normalized available capacity Ac, and its output is the admittance decision Ad. The admittance decision is either to accept the request, reject the request, or pre-empt some low-priority requests and outsource them to a public cloud. The traffic requests are divided into two groups, namely RT and NRT traffic, as follows:

- RT classes: these are guaranteed bit rate (GBR) services, which include VoIP, live streaming, video calls and real-time gaming. These services are delay sensitive.
- NRT classes: these are non-GBR services, which include buffered streaming and transmission control protocol (TCP) based services like web browsing, email, file transfer protocol (ftp), and point-to-point (p2p). These services are delay tolerant.

3.3.1 Cloud Bursting Technique for Pre-empted Connections

The cloud bursting technique allows operators to dynamically extend their infrastructure by renting third-party resources [15]. During congestion of the operator's C-RAN infrastructure, when a high-priority RT connection arrives, as illustrated in Fig. 3,
3, and\nthe cloud is congested, two things happen, either\nthe low priority NRT connections are pre-empted\nfrom the operator’s C-RAN and then bursted into\nthe public C-RAN infrastructure to accommodate the\nhigh priority RT connections or the RT connection is\ndropped if there are no NRT connections to pre-empt\nin the operator’s C-RAN. RT connections are never\noutsourced to the public cloud because they are delay\nsensitive. Only NRT connections are outsourced to\nthe public cloud. An agreement is made between\nthe operator and the public cloud operator and a\ncertain price is charged for outsourcing some NRT\nconnections. When a NRT connection arrives and the\noperator’s cloud is congested, the NRT connection is\nforwarded to the public cloud as shown in Fig. 3 with a\ncertain price penalty where the request will be charged\nby the charging manager.\n3.3.2 Structure of Fuzzy-Logic Controller\nThe fuzzy controller of the proposed scheme takes\nthree inputs: (i) effective capacity, Ec; (ii) available\ncapacity, Ac; and (iii) network congestion factor, Nc\nand output the admittance decision, Ad. Below is the\ndescription of the structure of the proposed fuzzy-logic\ncontroller.\nMembership Functions: Trapezoidal and triangular\nmembership functions are chosen for simplicity. The\nmembership functions for input and output linguistic\nparameters are shown in Fig. 4. The values of the\nmembership functions have been chosen based on\ncommonly used values of membership functions in\nvarious literature. For the fuzzy controller, the term\nsets for Ec, St, Ac, Nc and Ad are defined as follows:\ni) T (Ec) = {Low,Medium,High}\nii) T (St) = {NRT,RT}\niii) T (Ac) = {NotEnough,Enough}\niv) T (Ad) = {Accept, Reject, Preempt}\nFuzzy Rule Base: The fuzzy rule base consists of a\nseries of fuzzy rules, shown in Table 1. These control\nrules are of the following form: IF ’condition’, THEN\n’action’. Example: if St is ’RT’ and ’St’ is ’Not Enough’\nand ’Ec’ is ’High’ then ’Reject’.\nDeffuzification Method: The Center of Gravity (COG)\n[1] method is used for defuzzification to convert the\ndegrees of membership of output linguistic variables\ninto crisp/numerical values. The COG method is\nadopted since the membership functions used are\nsimple triangular and trapezoidal shapes with low\ncomputational complexity and can be expressed as [1]:\nZCOG =\n∫\nz\nµ(z)zdz∫\nz\nµ(z)dz\n(1)\n3.3.3 Queueing System for pre-emted connections\nThe connections in the cloud follows the M/M/c/K\nqueueing model or Erlang-B model [16]. In theM/M/c/K\nmodel, the request arrival is governed by a Poisson\nprocess at arrival rate λ and the service times are\nexponentially distributed with parameter µ and there\nare c servers in the cloud processing the requests from\nthe front of the queue. The variable K denotes the\ncapacity of the system. The buffer is considered to be\nof a finite size and connection requests greater that the\nqueue length are dropped. The model can be described\nas a continuous time Markov chain which is a type of a\nbirth death process. The server utilization, ρ, is written\nas [16]:\nρ =\nλ\ncµ\n, ρ < 1 (2)\nThe variable ρ should be less than one for the queue to\nbe stable otherwise the queue will grow without bound.\nThe probability that the system contain n connections\ncan be written as [16]:\npi0 =\n[ c∑\nn=0\nλn\nµnn!\n+\nλc\nµnc!\nK∑\nn=c+1\nλn−c\nµn−ccn−c\n]−1\n(3)\npin =\n{\n(λ/µ)n\nn! pi0, for n = 1, 2, ..., c\n(λ/µ)kn\ncn−cc! 
pi0, for n = c+ 1, ...,K.\n(4)\nwhere pin is the probability that the cloud system\ncontains n connections. The amount of time a connection\nspends in both the queue and in service is called the\nresponse time. The average response time is given as\n[16]:\nT = pi0\nρ(cρ)c\n(1− ρ)2c! +\n1\nµ\n(5)\nThen the probability that an arriving connection is\nblocked can be written using Erlang B formula as [16]:\nSigwele et al. Page 5 of 7\nPb =\nρc\nc!∑c\ni=0\nρi\ni!\n(6)\n4 Results and Analysis\n4.1 Simulation Parameters\nFour schemes are compared for perfomance evaluation;\n1) CAC scheme on distributed RAN systems with\nstand alone BBUs serving individual BSs from our\nprevious work in [1],\n2) CAC on C-RAN without fuzzy-logic applied from\nour previous work in [1],\n3) CAC with fuzzy-logic on C-RAN without pre-emption\nfrom our previous work in [2] and\n4) the proposed CAC with fuzzy-logic on C-RAN with\npre-emption in this paper.\nThe Matrix Laboratory (MATLAB) was used to\nsimulate the proposed framework. For simulation and\nperformance evaluation the following 4 traffic classes\nor service types were considered as shown in Table II\nfrom [17]:\n• VoIP as RT service\n• Conversational video (live streaming) as RT\nservice\n• ftp as NRT service\n• web browsing or www as NRT service\nThe MBR values are taken as the values for Ec.\nFour traffic classes are evaluated for simplicity but the\nproposed framework applies to multiple traffic classes.\nThe value of λ was varied with every simulation\nand 100 calls were generated for each traffic class.\nThe simulation time was kept at 500 seconds. The\nmembership function for the inputs and output of the\nfuzzy controller are shown in Fig. 4. It is assumed\nthat the network operator operating the private cloud\nenter into an agreement with the public cloud operator\nwhich involves the Service Level Agreement (SLA)\nwhich involves the cost. The cost of accepting a\nconnection request in the public cloud is assumed to\nbe 10% of what the private C-RAN operator will make\nwhen processing the request. It should be noted that\nthe request size, duration and QoS can form the basis\nof how much the request can be charged but this will\nbe considered in future, but in this paper, only 10% is\ndeducted by the public cloud.\n4.2 Simulation Results\nFig. 5, Fig. 6 and Fig. 7 shows a comparison of the\ncombination of the input terms Ec,Ac, St and the\noutput term Ad when the fuzzy rules in table 1 are\napplied. The figure shows that as the value of Ec\nincreases, the admittance (Ad) decreases meaning that\nwhen the value Ec of a particular service is Low,\nthe admittance becomes Accept and as the value of\nEc increases, the admittance becomes Preempt. Also,\nfor available capacity (Ac), the figures shows that\nwhen Ac isNotEnough, the admittance value becomes\nhigher meaning that there is Preemption of NRT\nconnections. As the value of Ac increases (to Enough),\nthe admittance tend to Accept. Finally for service type\n(St), when the value of St increases from NRT to RT ,\nthe admittance decreases from Preemption to Accept\nsince the NRT requests are pre-empted and the RT\nconnections are accepted.\nFig. 8 shows the blocking probability versus offered\ntraffic load. The figure shows that for all the schemes,\nas the offered traffic increases, the blocking probability\nalso increases. The CBP of the CAC distributed RAN\nis higher than all the other schemes because the\nbaseband computing power is limited as each cell is\ncovered by a single BBU with limited capacity. 
The\nblocking probability of CAC C-RAN with no fuzzy\nscheme also performs poorly with blocking probability\ngreater than threshold at 40% offered traffic load\ndue to improper and uncertain decision making of\nthe admission control scheme without fuzzy-logic.\nThe fuzzy C-RAN without pre-emption performs well\nup to 90% traffic load compared to the previous\ntwo schemes because fuzzy-logic avoids imprecisions\nand uncertainties when performing admission control.\nThe fuzzy C-RAN with pre-emption scheme performs\nbetter than all the rest with 100% traffic below\nblocking probability threshold of 5% because, instead\nof connection requests being blocked, they are forwarded\nto a public cloud as such more connections are\naccepted in the system.\nFig. 9 shows the resource utilization in the private\nC-RAN cloud for different traffic arrival rates. The\nfigure shows that as the arrival rate increases, the\nresource utilization in the cloud also increases because\nmore requests are being processed and occupies the\navailable capacity. The fuzzy C-RAN with pre-emption\nand the fuzzy C-RAN without pre-emption scheme\nhave the same but higher resource utilization than\nall the other schemes because the BBUs in the cloud\nare shared and a single BBU can process requests\nfrom multiple cells. It can be noticed that pre-emption\nhave no impact on resource utilization. The CAC\nC-RAN with no fuzzy scheme has high utilization\nthan the CAC distributed RAN scheme because in\nthe latter, BBUs are stand alone and BBU processing\nresources are not shared to address varied traffic needs\nin the cell area. Fig. 10 shows the response time\nversus offered traffic load for C-RAN system. The\nfigure shows that as the offered traffic increases, the\nresponse time increases because more requests takes\nmore time to be processed. The figure shows that the\nSigwele et al. Page 6 of 7\npre-empted NRT connections takes more time to be\nprocessed because they are forwarded to the public\ncloud which incurs more delays, but this does not\naffect the NRT pre-empted connections because they\nare delay tolerant. The new RT connections are delay\nsensitive and they have small response time because\nthey are processed in the private cloud and not in the\npublic cloud.\nFig. 11 shows the operators revenue for peak traffic\nperiods. At peak traffic periods, the CAC distributed\nRAN scheme has a blocking probability of 0.5 which\nmeans the revenue is 50%, where the lower revenue is\ndue to higher blocking probability. The CAC C-RAN\nwith no fuzzy scheme has a blocking probability of\n20% at peak traffic leading to a revenue of 80%.\nThe fuzzy C-RAN without pre-emption scheme has a\nblocking probability of 10% at peak tarffic leading to\n90% revenue for the operator while the fuzzy C-RAN\nwith pre-emption scheme has the highest revenue than\nall the schemes which is 95% since more requests\nare accepted in both the private and public cloud.\nFig. 12 shows the the total network throughput for\ndifferent traffic loads and it can be shown that for\nboth schemes, as the traffic load increases, the network\nthroughput also increases. The throughput is for the\nentire network and it is calculated at the BBU pool\nand it is expected to be larger. The fuzzy C-RAN with\npre-emption scheme has a higher throughput than\nall the other schemes with 900Mbps and 1600Mbps\nduring low and peak traffic respectively and the\nscheme is 28.6% effective compared CAC distributed\nRAN scheme. 
This is because more connections are\nbeing accepted as more computing resources are being\nprovided by the public cloud using the cloud bursting\ntechnique. The fuzzy C-RAN without pre-emption\nhas a throughput of 880Mbps and 1550Mbps during\nlow and peak traffic respectively and outperforms the\nCAC distributed RAN scheme by 25.7%. The CAC\nC-RAN with no fuzzy performs better than the CAC\ndistributed scheme by 8.6%. The CAC distributed\nRAN performs poorly that the rest of the schemes with\n700Mbps and 1400Mbps during low and peak traffic\nrespectively because it has high blocking probability\ndue to limited baseband computing resources.\n5 Conclusion\nIn this paper, a fuzzy-logic based call admision\ncontrol (CAC) scheme is proposed in fifth generation\n(5G) cloud radio access networks (C-RAN). The\nfuzzy-logic avoids uncertainities caused by traditional\nCAC schemes in distributed RAN systems. A cloud\nbursting technique used proposed where during congestion,\nlow priority delay tolerant none realtime (NRT)\nconnections are pre-empted and outsourced to a public\ncloud at a certain price penalty. The simulation results\nshows that the proposed scheme has low blocking\nprobability which is within blocking probability threshold\nlimit of 5%. The proposed scheme has a return revenue\nof 95%.\nCompeting interests\nThe authors declare that they have no competing interests.\nAuthor’s contributions\nAll the authors have contributed significantly to this research articles.\nBelow are the author’s contributions:\ni) Mr Tshiamo Sigwele have contributed significantly to this paper both\non comming up with mathematical models, running simulations and\nwritting up the paper.\nii) Dr Prashant Pillai has contributed significantly on the aspects of\nsupervision, the organization of the paper, reviewing and the validation\nof the proposed framework.\niii) Dr Atm Shafiul Alam has also contributed significantly in the related\nwork section and also helped in drawing the diagrams in this paper.\niv) Prof Yim Fun Hu has also contributed significantly in this work by\nproof reading the work and validating the proposed framework and\nalso restructuring the paper.\nAcknowledgements\nThe authors would like to thank MoESDP-DTEF (Ministry of Education\nSkills & Development Planning, Department of Tertiary Education and\nFinancing, Botswana, Africa) for financing this research.\nAuthor details\n1Faculty of Engineering and Informatics, University of Bradford, BD7 1DP\nBradford, UK. 2Faculty of Technology, Design and Environment, Oxford\nBrookes University, Oxford, UK. 3Institute of Communication Systems, 5G\nInnovation Centre University of Surrey, Du¨sternbrooker Weg 20, GU2 7XH\nGuildford, UK.\nReferences\n1. T Sigwele, P Pillai, Y Hu, Call Admission Control in Cloud Radio\nAccess Networks, in IEEE Future Internet of Things and Cloud, 2014\n2. T Sigwele, P Pillai, Y Hu, Elastic call admission control using fuzzy\nlogic in virtualized cloud radio base stations, in International\nConference on Wireless and Satellite Systems,2015.\n3. M Farahabady, YC Lee, AY Zomaya, Pareto-optimal cloud bursting.\nIEEE Transactions on Parallel and Distributed Systems. 25, 2670–2682\n(2014).\n4. MH Ahmed, Call admission control in wireless networks: A\ncomprehensive survey. IEEE communications Surveys and Tutorials.\n7(1-4), 50–69 (2005).\n5. D Niyato, E Hossain, Call admission control for QoS provisioning in 4G\nwireless networks: issues and approaches. IEEE network. 19(5), 5–11\n(2005).\n6. 
Q Liang, NN Karnik, JM Mendel, Connection Admission Control in\nATM networks using survey-based type-2 fuzzy logic systems. IEEE\nTransactions on Systems, 30(3), 329–339 (2000).\n7. Y Liu, M Meng,Survey of admission control algorithms in IEEE\n802.11e wireless LANs, in Future Computer and Communication, 2009.\n8. V Kolici, T Inaba, A Lala, G Mino, S Sakamoto, L Barolli, A\nfuzzy-based cac scheme for cellular networks considering security, in\nNetwork-Based Information Systems, 2014.\n9. L Barolli, A Koyama, T Yamada, S Yokoyama, T Suganuma, N\nShiratori, A fuzzy admission control scheme for high-speed networks,\nin 12th International Workshop on Database and Expert Systems\nApplications, 2001.\n10. C.T Ovengalt, K Djouani, A Kurien, A fuzzy approach for call\nadmission control in lte networks. Procedia Computer Science. 32,\n237–244 (2014).\n11. L Barolli, M Durresi, K Sugita, A Durresi, A Koyama, A cac scheme\nfor multimedia applications based on fuzzy logic, in 19th International\nConference on Advanced Information Networking and Applications,\n2005.\nSigwele et al. Page 7 of 7\n12. D Karabudak, C Hung, B Bing, A call admission control scheme using\ngenetic algorithms, in Proceedings of the 2004 ACM Symposium on\nApplied Computing, 2004.\n13. RG Cheng, CJ Chang, LF Lin, A QoS-provisioning neural fuzzy\nconnection admission controller for multimedia high-speed networks.\nIEEE/ACM Transactions on Networking. 7(1), 111–121 (1999).\n14. X Yang, J Bigham, A call admission control scheme using\nneuroevolution algorithm in cellular networks, in International Joint\nConference on Artificial Intelligence, 2007.\n15. S Acs, M Kozlovszky, P Kacsuk, A novel cloud bursting technique. in\nApplied Computational Intelligence and Informatics, 2014.\n16. AO Allen, Probability Statistics and Queueing Theory, 1edn.\n(Academic Press, 2014).\n17. K Leonhard, How to dimension user traffic in 4G networks. (Slideshare,\n2014), http://physicsweb.org/articles/news/11/6/16/1. Accessed 1\nJune 2017.\nTables\nTable 1 Fuzzy Rule Base for Fuzzy Controller.\nRule St Ac Ec Ad\n1 RT Not Enough Low Outsource\n2 RT Not Enough Medium Reject\n3 RT Not Enough High reject\n4 RT Enough Low Accept\n5 RT Enough Medium Accept\n6 RT Enough High Accept\n7 NRT Not Enough Low Outsource\n8 NRT Not Enough Medium Outsource\n9 NRT Not Enough High Outsource\n10 NRT Enough Low Accept\n11 NRT Enough Medium Accept\n12 NRT Enough High Accept\nFigures\nFigure 1 C-RAN Architecture. 
A figure showing the C-RAN\narchitecture.\nFigure 2 Proposed System Model.A figure showing the\nproposed system model.\nTable 2 Simulation Parameters [17].\nQCI Service Type Priority Delay PER MBR/Ec\n1 VoIP GBR 2 100ms 10−2 12Kbps\n3 Conversational video GBR 4 150ms 10−3 240Kbps\n8 ftp non-GBR 8 300ms 10−6 512Kbps\n9 www non-GBR 9 300ms 10−6 512Kbps\nFigure 3 Cloud bursting model.A figure showing the cloud\nbursting technique when RT and NRT connections arrive into\nthe congested BBU pool.\nFigure 4 Membership functions for (a) Effective capacity,\nEc (b) Service type, St (c) Available capacity, Ac (d)\nAdmittance, Ad.A figure showing the the membership\nfunctions of various fuzzy-logic terms.\nFigure 5 Comparison for inputs St, Ec and the output\nAd.A figure showing the comparison of fuzzy inputs and\noutput.\nFigure 6 Comparison for inputs Ec,Ac and the output Ad.\nA figure showing the comparison of fuzzy inputs and output.\nFigure 7 Comparison for the inputs St,Ac and the output\nAd.A figure showing the comparison of fuzzy inputs and\noutput.\nFigure 8 Blocking probability versus offered traffic load.\nFigure 9 System utilization versus arrival rate.A figure\nshowing how the system utilization in the BBU pool varies\nwith the change in arrival rate of requests.\nFigure 10 Response time versus offered traffic load.A figure\nshowing how the service response time/delay varies with the\nincrease in the offered traffic load.\nFigure 11 Operator revenue.A figure showing the operator’s\nincome from hosting connection requests from users.\nFigure 12 System throughput.A figure showing the total\nthroughput of the C-RAN system\n", "fullTextIdentifier": "https://bradscholars.brad.ac.uk/bitstream/10454/13203/1/Fuzzy%20Logic%20Based%20CAC%20in%205G%20Cloud%20RAN%20with%20Pre-emption.pdf", "identifiers": [], "issn": "1687-1499", "journals": [], "language": null, "magId": "2755030560", "oai": "oai:bradscholars.brad.ac.uk:10454/13203", "pdfHashValue": "818b244102ee3d51a469f4653d1f9666d79a3f72", "publisher": "'Springer Science and Business Media LLC'", "relations": [ "https://doi.org/10.1186/s13638-017-0944-x" ], "subjects": [ "Article", "Published version" ], "title": "Fuzzy-Logic Based Call Admission Control in 5G Cloud Radio Access Networks with Pre-emption", "topics": [ "Call admission control; Cloud Radio Access Network; Cloud RAN; Pre-emption; Fuzzy-logic; 5G" ], "urls": [ "http://hdl.handle.net/10454/13203" ], "year": 2017 }
[ 49, 48, 46, 116, 97, 114, 46, 120, 122 ]
000000005
hf://datasets/laion/open-access-papers@1409f48923cf9afefb2c7268cc33561062112592/data/shard-000.tar.gz
{ "abstract": "This study uses a narrative analytic approach to explore the similarities and differences between pre-Enlightenment and post-Enlightenment firsthand accounts of madness in order to answer the question; what is the relationship between madness, narrative, understanding, identity and recovery? Drawing on the work of Foucault, the research traces the historical and cultural development of conceptualisations of reason and unreason, the rise of psychiatry and the marginalisation of the voice of madness. I argue that this marginalisation is continued in narrative research where the focus is on the stories of the physically ill, rather than madness. The narrative method provides a means of giving space to these marginalised voices and it is Bakhtin¿s constructs of dialogicism, polyphony, unfinalizability and the chronotope that provide the tools for the narrative analysis of two female English writers; Margery Kempe and Mary Barnes. The analysis highlights three critical issues in relation to firsthand narratives of madness. First, the blurred boundaries between madness and mysticism and the role of metaphor in understanding distressing experiences. Second, the complex, multi-dimensional nature of subjective timespace that challenges the linear assumptions underlying both narrative and recovery, which, I argue, demands a radical reconceptualisation of both constructs. Third, the liminal social positioning within the analysed accounts is closely related to Bakhtin¿s notion of unfinalizability, a form of being that enables the search for meaning and the transformation of the self. Insights can be gained from this research that may place stories and understanding central in contemporary healthcare.School of Health Studies at the University of Bradford", "authors": [ "Torn, Alison" ], "contributors": [ "Burkitt, Ian", "Thomas, Phil" ], "coreId": "136417", "datePublished": "2009-01-01T00:00:00", "doi": null, "downloadUrl": "https://core.ac.uk/download/136417.pdf", "enrichments": { "citationCount": null, "documentType": { "confidence": null, "type": null }, "references": [] }, "fullText": " \n \nMadness and Narrative Understanding \n \n \n \nA Comparison of Two Female Firsthand Narratives of Madness in the Pre \nand Post Enlightenment Periods \n \n \n \n \n \n \nAlison TORN \n \n \n \n \n \n \n \nSubmitted for the degree of Doctor of Philosophy \n \n \n \nSchool of Health Studies \n \n \nUniversity of Bradford \n \n \n \n \n \n \n2009 \n", "fullTextIdentifier": "https://bradscholars.brad.ac.uk/bitstream/handle/10454/3352/1Title%20page.pdf?sequence=2&isAllowed=y", "identifiers": [], "issn": null, "journals": [], "language": { "code": "en", "name": "English", "id": 9 }, "magId": null, "oai": "oai:bradscholars.brad.ac.uk:10454/3352", "pdfHashValue": "96a7ce497edd3a68207103f11f976a28dca04929", "publisher": "School of Health Studies", "relations": [], "subjects": [ "Thesis", "doctoral", "PhD" ], "title": "Madness and narrative understanding: A comparison of two female firsthand narratives of madness in the pre and post enlightenment periods.", "topics": [ "Madness", "Narrative", "Dialogue", "Chronotype", "Identity", "Recovery", "Margery Kempe", "Mary Barnes" ], "urls": [ "http://hdl.handle.net/10454/3352" ], "year": 2009 }
[ 49, 48, 46, 116, 97, 114, 46, 120, 122 ]
000000006
hf://datasets/laion/open-access-papers@1409f48923cf9afefb2c7268cc33561062112592/data/shard-000.tar.gz
{ "abstract": "noThis article observes separately and jointly the impact of international financial reporting standards (IFRS) and/or board of directors’ independence on accounting conservatism in FTSE 100 nonfinancial firms between 2002 and 2007. Using Givoly and Hayn’s (2000) accrual-based measure of accounting conservatism, we found a reduction in conservatism after the mandatory adoption of IFRS, and, also, that board of directors’ independence improved accounting conservatism. Moreover, IFRS and board of directors’ independence had a complementary impact on accounting conservatism since the role of independent directors was not observable prior to the mandatory adoption of IFRS. Our results suggest that, after the mandatory adoption of IFRS, independent directors are likely to put significantly more pressure on the management to practice more accounting conservatism", "authors": [ "Elshandidy, Tamer", "Hassinen, A." ], "contributors": [], "coreId": "153515244", "datePublished": "2014-06-03T00:00:00", "doi": "10.1080/09603107.2014.924291", "downloadUrl": null, "enrichments": { "citationCount": 4, "documentType": { "confidence": null, "type": null }, "references": [] }, "fullText": null, "fullTextIdentifier": null, "identifiers": [], "issn": "0960-3107", "journals": [], "language": null, "magId": "2083652737", "oai": "oai:bradscholars.brad.ac.uk:10454/12864", "pdfHashValue": null, "publisher": "'Informa UK Limited'", "relations": [ "http://dx.doi.org/10.1080/09603107.2014.924291" ], "subjects": [ "Article", "No full-text in the repository" ], "title": "Do IFRS and board of directors’ independence affect accounting conservatism?", "topics": [ "Board of directors’ independence; Corporate governance; Accounting; Conservatism; IFRS" ], "urls": [ "http://hdl.handle.net/10454/12864" ], "year": 2014 }
[ 49, 48, 46, 116, 97, 114, 46, 120, 122 ]
000000007
hf://datasets/laion/open-access-papers@1409f48923cf9afefb2c7268cc33561062112592/data/shard-000.tar.gz
{ "abstract": "YesWhat is the nature and extent of engagement within peace research with the unfolding\r\nglobal environmental crisis, as captured in discourses about the ‘Anthropocene’(Bonneuil &\r\nFressoz, 2017; Dalby, 2015)? Is the peace research scholarly community connecting with\r\nsignificant debates taking place in the earth sciences or among social and political\r\nmovements? If it is, in what ways? Are concepts of violence and peace evolving in line with\r\nthe major trends driving change this century, including climate change? This article seeks\r\nanswers to these questions through a systematic survey and thematic analysis of publications\r\nin key peace-related journals and book series.What is the nature and extent of engagement within peace research with the unfolding\r\nglobal environmental crisis, as captured in discourses about the ‘Anthropocene’(Bonneuil &\r\nFressoz, 2017; Dalby, 2015)? Is the peace research scholarly community connecting with\r\nsignificant debates taking place in the earth sciences or among social and political\r\nmovements? If it is, in what ways? Are concepts of violence and peace evolving in line with\r\nthe major trends driving change this century, including climate change? This article seeks\r\nanswers to these questions through a systematic survey and thematic analysis of publications\r\nin key peace-related journals and book series", "authors": [ "Kelly, Rhys H.S." ], "contributors": [], "coreId": "326518585", "datePublished": "2020-06-17T15:41:52", "doi": null, "downloadUrl": null, "enrichments": { "citationCount": null, "documentType": { "confidence": null, "type": null }, "references": [] }, "fullText": null, "fullTextIdentifier": null, "identifiers": [], "issn": null, "journals": [], "language": null, "magId": null, "oai": "oai:bradscholars.brad.ac.uk:10454/17882", "pdfHashValue": null, "publisher": null, "relations": [], "subjects": [ "Article", "Accepted manuscript" ], "title": "Avoiding the Anthropocene: An Assessment of the Extent and Nature of Engagement with Environmental Issues in Peace Research", "topics": [ "Peace research", "Climate change", "Ecological crisis", "Anthropocene", "Environmental discourse", "Social cartography" ], "urls": [ "http://hdl.handle.net/10454/17882" ], "year": 2020 }
[ 49, 48, 46, 116, 97, 114, 46, 120, 122 ]
000000008
hf://datasets/laion/open-access-papers@1409f48923cf9afefb2c7268cc33561062112592/data/shard-000.tar.gz
{ "abstract": "n", "authors": [ "Capstick, Andrea", "Ludwin, Katherine", "Chatwin, John" ], "contributors": [], "coreId": "76945302", "datePublished": "2014-01-01T00:00:00", "doi": null, "downloadUrl": null, "enrichments": { "citationCount": null, "documentType": { "confidence": null, "type": null }, "references": [] }, "fullText": null, "fullTextIdentifier": null, "identifiers": [], "issn": null, "journals": [], "language": null, "magId": null, "oai": "oai:bradscholars.brad.ac.uk:10454/7381", "pdfHashValue": null, "publisher": null, "relations": [ "http://www.profbriefings.co.uk/involve2014/involve14programme.html" ], "subjects": [ "Conference paper", "No full-text available in the repository" ], "title": "Using Participatory Video to enhance involvement for people with dementia.", "topics": [ "Participatory video; Dementia care; Involvement" ], "urls": [ "http://hdl.handle.net/10454/7381" ], "year": 2014 }
[ 49, 48, 46, 116, 97, 114, 46, 120, 122 ]
000000009
hf://datasets/laion/open-access-papers@1409f48923cf9afefb2c7268cc33561062112592/data/shard-000.tar.gz
{ "abstract": "NoThe purpose of this paper is to describe an integrated manufacturing strategy for the deployment of a CAD/CAM system in a small, medium manufacturing enterprise (SMME). A case study of a SMME is utilised in deploying an integrated CAD/CAM system for practical application of manufacturing technology for achieving sustainable growth through lean systems design (LSD). The paper presents a techno‐economic and technology change management framework, with an application of a holistic set of lean deployment tools that include establishing a strategic and operational plan for implementing CAD/CAM systems as a means to achieving world‐class performance. The paper shows that the CAD/CAM integration within the case company increased knowledge of CAD/CAM technology, productivity, and flexibility whilst reducing throughput times. Based on the literature review and the current case study, a framework for ideal CAD/CAM implementation has been proposed. The paper also shows that management and organisational structures are key inhibitors for successful implementation of technology integration", "authors": [ "Esan, Adedeji O.", "Khan, M. Khurshid", "Qi, Hong Sheng", "Naylor, C." ], "contributors": [], "coreId": "153513826", "datePublished": "2013-01-01T00:00:00", "doi": "10.1108/17410381311292331", "downloadUrl": null, "enrichments": { "citationCount": 7, "documentType": { "confidence": null, "type": null }, "references": [] }, "fullText": null, "fullTextIdentifier": null, "identifiers": [], "issn": "1741-038X", "journals": [], "language": null, "magId": "1996274381", "oai": "oai:bradscholars.brad.ac.uk:10454/9652", "pdfHashValue": null, "publisher": "'Emerald'", "relations": [ "http://dx.doi.org/10.1108/17410381311292331" ], "subjects": [ "Article", "No full-text available in the repository" ], "title": "Integrated manufacturing strategy for deployment of CADCAM methodology in a SMME", "topics": [ "CADCAM methodology; SMME; Integrated manufacturing strategy" ], "urls": [ "http://hdl.handle.net/10454/9652" ], "year": 2013 }
[ 49, 48, 46, 116, 97, 114, 46, 120, 122 ]
End of preview.

Dataset Card for CORE Open Access Paper Dataset

Dataset Summary

This dataset contains open access academic papers collected from CORE (core.ac.uk). It includes bibliographic metadata for publications across many disciplines and, where available, the full text of each paper.

Languages

The dataset is primarily in English, but may contain papers in other languages as well.

Dataset Structure

Data Instances

Each instance in the dataset represents an academic paper and contains the following information (a short parsing sketch follows the list):

  • __key__: A unique identifier for the paper (e.g., "000000000")
  • __url__: The URL of the shard file containing the paper data
  • json: A JSON string containing detailed metadata about the paper, including:
    • doi: Digital Object Identifier
    • coreId: CORE identifier
    • oai: Open Archives Initiative identifier
    • title: Paper title
    • authors: List of authors
    • datePublished: Publication date
    • abstract: Paper abstract
    • publisher: Publisher information
    • year: Publication year
    • topics: List of topics covered in the paper
    • subjects: Subject categories
    • urls: Related URLs
  • path: Path to the compressed file containing the full text (if available)
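
As a minimal sketch of reading these fields, assuming each sample arrives as a dictionary keyed by field name (as WebDataset yields it) and that the json entry decodes to the keys listed above, the metadata can be parsed like this (parse_instance is an illustrative helper, not part of any library):

import json

# 'sample' is one record as yielded by WebDataset (see "How to Use" below):
# a dict holding "__key__", "__url__", and the raw bytes of the "json" field.
def parse_instance(sample):
    meta = json.loads(sample["json"])  # the "json" entry is a JSON string
    return {
        "key": sample["__key__"],
        "title": meta.get("title"),
        "authors": meta.get("authors", []),
        "year": meta.get("year"),
        "has_full_text": meta.get("fullText") is not None,
    }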

Dataset Creation

Curation Rationale

The dataset is packaged in the streamable WebDataset format, so shards can be processed one at a time without downloading the entire dataset up front.

Source Data

Initial Data Collection and Normalization

The dataset was obtained from CORE (core.ac.uk), which aggregates open access research outputs from repositories and journals worldwide.

Who are the source language producers?

The source language producers are researchers and academics who have published open access papers that have been indexed by CORE.

Personal and Sensitive Information

The dataset contains information about academic papers, including author names and affiliations. Users of the dataset should be aware of and respect any copyright or usage restrictions associated with the papers.

Considerations for Using the Data

Social Impact of Dataset

This dataset can potentially accelerate research by providing easy access to a large corpus of open access academic papers across various disciplines.

Discussion of Biases

The dataset may reflect biases present in academic publishing, such as language biases (favoring English-language publications) or geographic biases based on the sources indexed by CORE.

Other Known Limitations

The completeness and quality of metadata may vary across papers in the dataset.

Additional Information

Licensing Information

The dataset is provided under the MIT License.

How to Use

The dataset can be read using WebDataset. Here's an example of how to load the data:

import webdataset as wds

# The brace pattern expands to shard-000.tar.gz ... shard-123.tar.gz
ds = wds.WebDataset("./data/shard-{000..123}.tar.gz")

This will load shards from shard-000.tar.gz to shard-123.tar.gz.
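
To read a shard fetched directly from the Hugging Face Hub, one possible approach is sketched below. It assumes the shards live under data/ in the laion/open-access-papers dataset repository (matching the path in the example above) and uses huggingface_hub to download and cache a single shard before iterating over it:

import json

import webdataset as wds
from huggingface_hub import hf_hub_download

# Fetch one shard from the dataset repository; the file is cached locally.
shard = hf_hub_download(
    repo_id="laion/open-access-papers",
    filename="data/shard-000.tar.gz",
    repo_type="dataset",
)

# Iterate over the records and decode the embedded metadata string.
for sample in wds.WebDataset(shard):
    meta = json.loads(sample["json"])
    print(sample["__key__"], meta.get("title"))
    break  # show just the first record

Each sample is a plain dictionary, so a helper like the parse_instance sketch in the Data Instances section can be applied directly inside the loop.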
