<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Neil Farrugia</style></author><author><style face="normal" font="default" size="100%">Joseph Vella</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Automating Footwear Impressions Retrieval through Texture</style></title><secondary-title><style face="normal" font="default" size="100%">Information &amp; Security: An International Journal</style></secondary-title></titles><keywords><keyword><style face="normal" font="default" size="100%">digital forensics</style></keyword><keyword><style face="normal" font="default" size="100%">digital image processing</style></keyword><keyword><style face="normal" font="default" size="100%">footwear impressions</style></keyword><keyword><style face="normal" font="default" size="100%">texture-based similarity</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2019</style></year></dates><volume><style face="normal" font="default" size="100%">43</style></volume><pages><style face="normal" font="default" size="100%">73-86</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;This study aims to provide an automatic footwear impression extraction and correlation system. The proposed artefact applies pre-processing, extracts key features, and retrieves relevant matches from a footwear impression repository.&lt;/p&gt;&lt;p&gt;To compare images, a comparison function was utilised: it creates an MPEG-1 movie out of the images and uses the size of the movie to calculate their similarity.
For pre-processing of the prints, apart from common techniques, an original concept of tessellations was applied. A publicly available dataset of footprints, a subset of which comes from crime scenes, was used. The results show that matching accuracy depends on the quality of the images used. Comparisons are done in two batches: first, all 170 crime scene prints are compared with all 1175 reference prints; then the procedure is repeated with various pre-processing methods applied to the input prints. Accuracy averaged 55% without pre-processing and 65% for a particular pre-processing method (based on 43 prints).&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue><section><style face="normal" font="default" size="100%">73</style></section></record></records></xml>