

    Table 6
    Overall MAP throughout the (1st–8th) iterations. Bold data correspond to the best results.
    Dataset | QPM | QEX | SVM-AL | MARRow
    Figs. 3–7 show the results comparing our proposed approach (MARRow) against the CBIR-T, QPM, QEX and SVM-AL approaches over the (a) first, (b) third, (c) fifth and (d) eighth iterations, using the datasets I1–I5, respectively. The precision obtained by the traditional CBIR (CBIR-T) approach is illustrated as a baseline in all graphs. MARRow presented higher precision than the state-of-the-art approaches across all iterations and datasets. MARRow also presented a consistent precision increase along the iterations.
    For instance, in Fig. 3(a), MARRow presented a precision up to 1.8 and 2.4 times better than QPM and SVM-AL, respectively. Moreover, it reached precision gains of 87.3% over QEX at a recall level of 50%. At the first recall levels, our approach also presented good results against QEX: for instance, gains of 7.6%, 45%, and 65.3% at recalls from 10% to 30%, respectively.
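The relative gains quoted above follow the standard relative-precision formula. A minimal sketch, using illustrative precision values rather than the paper's actual measurements:

```python
def precision_gain(p_ours, p_baseline):
    """Relative gain of our precision over a baseline precision, in percent."""
    return 100.0 * (p_ours - p_baseline) / p_baseline

# Illustrative (hypothetical) precisions at a fixed recall level:
print(round(precision_gain(0.75, 0.40), 1))  # prints 87.5
```

A gain of 87.3% at recall 50%, as reported against QEX, therefore means MARRow's precision at that recall level was roughly 1.873 times the QEX precision.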
    The same behavior can be observed with the other datasets (Figs. 4–7), which represent scenarios of increasing complexity. As the complexity increases, the datasets become considerably more difficult (see Section 3.1), due to the intrinsic inter-class similarity, which leads to a harder separation between relevant and irrelevant images and a fine-grained annotation process. Despite these issues, MARRow presented the best results in comparison with the other approaches. Analyzing the first iterations (Figs. 5–7), all approaches performed almost equally. However, at further iterations, MARRow presented a better and more consistent precision growth. This is a key ingredient of MARRow: while the other approaches reach a saturation point, MARRow is able to mitigate this problem throughout the iterations. Note that the more naive approaches (e.g. QPM) presented a stronger saturation plateau.
    Summarizing the results, Table 6 presents the overall MAP obtained by each approach throughout the (1st to 8th) learning iterations. According to our extensive experimental evaluation, MARRow presented the best precision for all datasets. Our approach also minimizes the computational time of the learning process, since it reduces the expert's involvement in the analysis and annotation process (by up to 88%). This reduction occurs because the expert does not need to annotate (correct) the labels of all samples, as required by related works in the literature. Our approach also yields a more robust classifier (i.e. one with fewer misclassifications, as can be seen from the presented results, e.g. Table 6), since more informative samples are selected for its learning.
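For reference, the metric reported in Table 6 (Mean Average Precision) can be computed as sketched below; the function names and toy relevance lists are illustrative, not taken from the paper:

```python
def average_precision(relevances):
    """AP for one ranked result list; relevances[i] is 1 if the i-th
    retrieved image is relevant to the query, else 0."""
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(relevances, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at each relevant hit
    return precision_sum / hits if hits else 0.0

def mean_average_precision(runs):
    """MAP: the mean of AP over the ranked lists of several queries."""
    return sum(average_precision(r) for r in runs) / len(runs)

# Two toy queries with binary relevance judgments:
print(mean_average_precision([[1, 0, 1, 0], [0, 1, 1, 1]]))
```

Because AP rewards relevant images appearing early in the ranking, MAP directly reflects the quality of the retrieval ordering, which is why it is the summary metric of choice here.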
    4. Conclusion
    In this paper, we proposed the MARRow (Medical Active leaRning and Retrieval) approach, which aggregates AL and RF methods in the medical image domain. We also proposed a new AL strategy that can be seamlessly integrated into the CBIR core process to mitigate several drawbacks regarding efficacy and efficiency in this domain. The proposed AL strategy selects a small set of the most informative images, using selection criteria based not only on similarity but also on degrees of diversity and uncertainty. These selected images can bring greater benefits than images from a single class, as usually considered in the literature.
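The three selection criteria mentioned above (similarity, diversity, uncertainty) could be combined, for example, in a greedy weighted scoring scheme. The sketch below is purely illustrative and is not the paper's actual algorithm; the weights, helper names, and data layout are all assumptions:

```python
import math

def select_informative(candidates, query, selected, k, w=(0.4, 0.3, 0.3)):
    """Greedily pick k candidates maximizing a weighted combination of
    similarity to the query, diversity from already-picked samples, and
    classifier uncertainty. (Illustrative sketch, not MARRow itself.)"""
    def dist(a, b):
        return math.dist(a["feat"], b["feat"])
    chosen = list(selected)
    pool = [c for c in candidates if c not in chosen]
    picks = []
    for _ in range(k):
        def score(c):
            sim = 1.0 / (1.0 + dist(c, query))            # closer to query -> higher
            div = min((dist(c, s) for s in chosen + picks), default=1.0)
            unc = 1.0 - abs(c["prob"] - 0.5) * 2.0        # prob near 0.5 -> most uncertain
            return w[0] * sim + w[1] * div + w[2] * unc
        best = max(pool, key=score)
        picks.append(best)
        pool.remove(best)
    return picks

# Toy usage: 2D features with a hypothetical classifier probability per image.
query = {"feat": [0.0, 0.0], "prob": 1.0}
cands = [{"feat": [0.1, 0.0], "prob": 0.9},
         {"feat": [0.2, 0.1], "prob": 0.5},
         {"feat": [2.0, 2.0], "prob": 0.6}]
print([c["feat"] for c in select_informative(cands, query, [], 2)])
```

In this toy run the uncertain near-duplicate is picked first, and the distant sample is picked second for diversity, which illustrates why such a mix avoids selecting a redundant batch of near-identical images from one class.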
    From the experiments, it is clear that our approach not only improves the precision of medical similarity queries to a great extent, but also boosts the efficiency of the process. Our approach overcomes the other state-of-the-art approaches, reaching precision gains of up to 87.3%. The results testify that MARRow is well suited to challenging processes, such as medical image analysis. As future work, we intend to propose other AL strategies to improve the selection of the most informative samples and, consequently, the quality of the retrieved images.
    Acknowledgments
    This work has been supported by the National Council for Scientific and Technological Development - CNPq (grants #431668/2016-7 and #422811/2016-5); the Coordination for the Improvement of Higher Education Personnel - CAPES; Fundação Araucária; SETI; and UTFPR.