Journal of Health and Medical Sciences
Volume 4, Issue 2, 2018
Research Article
Method for Analysis of Skin Lesions Suspicious of Melanoma
  • Jessica Rojas Rosales; Marlén Pérez Díaz & Alberto Taboada Crispí

 

Correspondence should be addressed to Marlén Pérez Díaz. E-mail: mperez@uclv.edu.cu

Received December 22, 2017. Accepted March 7, 2018.

ROJAS, J. R.; PÉREZ, M. D.; TABOADA, A. C. Method for analysis of skin lesions suspicious of melanoma. J. health med. sci., 4(2):115-122, 2018.
ABSTRACT: Computational vision has been extensively used in biomedical applications, due to the fast and accurate computation available on present-day computers and mobile devices. A method for the fast detection of lesions suspicious of melanoma is proposed in the present work. Skin images in JPEG format with a resolution of 150 x 150 pixels were used: 35 with melanoma and 35 without melanoma, from two annotated databases. A median filter of 7 x 7 pixels was applied to the green channel, and a segmentation procedure using Otsu's method was applied so that the images could be used as reference for the extraction of 13 features based on the ABCD rule of dermatology (Asymmetry, Border, Color and Diameter). The 13 features implemented were: the lesion perimeter (p) and area (A), their ratio (p/A), the perimeter and area of the bounding box, the variance and median of the lesion, as well as the ratios between the standard deviations, medians and areas of the right part vs. the left part of the lesion and of the upper part vs. the lower part. These features were introduced at the input of a neural network with 5 neurons in the hidden layer and 2 in the output layer, used as classifier. After training and validation of the neural network, we obtained similar or better results in accuracy, sensitivity, specificity, and positive and negative predictive values than other methods previously reported. The classification rates were higher than 90 %.

KEYWORDS: computer vision, melanoma detection, digital image processing, neural network.

INTRODUCTION


Computer vision is a relatively new, fast-growing field of research. Its main ideas have emerged from dissimilar areas such as artificial intelligence, psychology, computer graphics and image processing. At the same time, the cost of computational vision systems has been reduced, and companies are today identifying new applications and products in fields like Biomedicine and Robotics (Will, 2014). Artificial vision is one of the areas of artificial intelligence with many improvements, including implementations on mobile devices. Object detection, localization and evaluation have become common and fast tasks using modern techniques of pattern recognition from images.
Malignant melanoma is today one of the main skin lesions affecting the white population all over the world. The only way to certify the presence of melanoma is to extract the tissue and have experts review it to look for malignant cells. Nevertheless, the high incidence of this type of lesion has induced researchers to look for automated diagnostic techniques or methods that can help detect the lesion in the initial stages of the disease. Skin cancer is almost 100 % curable when it is detected early enough and removed by surgery, while the survival rate for melanoma is almost 70 % if the disease is recognized early (Shetty & Turkar, 2012).
The ABCD rule of dermatology (Amaliah, 2011) gives some hints to classify a lesion as benign, suspicious or malignant. The criteria take into account the asymmetry, borders, color and dimension of the lesion. In this case, the classification is based on a medical criterion: an asymmetric dark lesion with irregular borders, whose dimension is continuously growing, has a high probability of being a melanoma and must be biopsied (Ali & Deserno, 2012). This method is employed all over the world and is subjective, although the four features are combined in a linear equation. There are other methods that also offer a qualitative lesion evaluation based on a list of answers, such as Menzies's method or the 7-point list (Tyagi et al., 2012). All of them have contributed to detecting malignancy early.
On the other hand, many papers have been dedicated to classification methods (Saravanan & Sasithra, 2014; Pradipnaswale & Ajmire, 2016). Fuzzy systems, genetic algorithms and deep learning are some of the possibilities. In particular, neural networks have been successful for the classification of skin lesions (Karabulut & Ibrikci, 2016; Abuzaghleh et al., 2015; Gutman et al., 2016). In Abuzaghleh et al., for example, a quantitative method derived from the ABCD rule is programmed for computers with very good results.
A method for the classification of skin lesions is proposed in this work. Its steps comprise a group of procedures and tools for early melanoma detection, programmed for a computer using MATLAB. The method includes a neural network as classifier, whose input is a selected group of features. These features are more inclusive than the qualitative ABCD rule, Menzies's method or the 7-point list, because they characterize the lesion through many more descriptors of asymmetry, border, color and dimension, expressed in terms of area, perimeter, etc., which are mathematically computable instead of purely qualitative. This quantitative formulation is the main novelty of this work.

MATERIAL AND METHOD


The method (named Melanoprot) includes procedures, and some auxiliary tools, to classify a suspicious lesion as melanoma or non-melanoma. Fig. 1 shows the diagram of the proposed method.

Fig. 1. Melanoprot Block Diagram.

The first step is to identify a suspicious lesion visually. A picture of the lesion should be taken with the camera of a mobile device. This image passes through a preprocessing procedure on a computer, aimed at fulfilling the illumination and orientation requirements for segmentation, as well as decreasing the noise level and avoiding the presence of hairs. A median filter (Huang et al., 1979) was programmed in MATLAB to pre-process each image of the database used. Different window sizes were tested by trial and error to select the proper value for the preprocessing median filter, looking for the lowest noise level. After the pre-processing procedure, the image can be segmented. If the image cannot be segmented adequately, another picture should be taken with the mobile device and the steps are repeated up to this point.
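A minimal MATLAB sketch of this preprocessing step is given below, assuming the Image Processing Toolbox. The paper states that the median filter was programmed by the authors following Huang et al., so medfilt2 is used here only as an equivalent stand-in; the file name and the 7 x 7 window (reported later, in the Results) are illustrative assumptions.

```matlab
% Preprocessing sketch: median filtering of one channel of a skin image.
% medfilt2 (Image Processing Toolbox) stands in for the authors' own filter.
rgb   = imread('lesion.jpg');            % hypothetical input image, 150 x 150 x 3
green = rgb(:, :, 2);                    % green channel (selected in the next step)
win   = [7 7];                           % window size found later by trial and error
greenFiltered = medfilt2(green, win);    % reduces noise and thin hairs
imshowpair(green, greenFiltered, 'montage');   % visual check of the result
```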
For the segmentation procedure (the next step of the method), it is important to select the correct channel of the RGB images (red, green or blue), i.e. the channel in which the peaks belonging to the useful signal (lesion) and to the background are best separated, which means that the image contrast is highest. After that, a threshold is found using Otsu's paradigm (Otsu, 1979), also programmed in MATLAB. It consists of determining the variance between classes from the image histogram: the dispersion within a class should be as low as possible, while the separation between classes should be as high as possible. The quotient between these variances is calculated, and the threshold is the value for which this quotient is maximum. With Otsu's threshold, the image is transformed into a binary image, and the negative of the binary mask is calculated.

The mask is then shifted along the horizontal and vertical axes, and the shifted image is modularly subtracted from the original one; this operation yields an edge image. The smallest bounding box containing the segmented lesion is determined, to avoid undesirable edge effects such as isolated white pixels in the corners of the box, which can affect the lesion analysis. The MATLAB function ind2sub was used for this. After that, the centroid of the bounding box was calculated from the median value of all the pixel pairs (i, j), and the centroid was always centered in a window of 150 x 150 pixels.

In the next step of the proposed method, MATLAB was used to calculate a group of features before running the classification procedure. Thirteen features have been proposed. They are also related to the analysis of asymmetry, uniformity of borders and color, and lesion dimension, but in a more complete way than previous subjective contributions (Amaliah; Ali & Deserno). A MATLAB sketch of this segmentation and feature-extraction stage is given after the list. The features proposed are:


1. The lesion perimeter, p. All the pixels with value equal to 1 in the edge image obtained after segmentation are added to calculate the perimeter. This feature is related to the analysis and exact quantification of the lesion size.

2. The lesion area, A. The area enclosed by the perimeter is calculated as another index of the lesion size. The same applies to the features included in Equations 1 to 3.

3. The ratio between the perimeter and the area, pa = p/A.

4. The perimeter of the bounding box, pbb = 2(m + n), where m and n are the number of rows and columns of the bounding-box matrix, respectively.

5. The area of the bounding box, Abb.

6. The variance of the grey levels of the lesion, varlun. This feature is related to color uniformity. Lesions with non-uniformities are more likely to be a melanoma.

7. The median of the grey levels of the lesion, medlun. It has a similar aim to the previous feature.

8. The maximum ratio of the standard deviations of the right part vs. the left part of the lesion, in %, sdlr. This feature is a good index to assess the asymmetry of the lesion and the non-uniformity of its borders. Non-symmetric lesions are more likely to be a melanoma. Equations 5 to 9 also aim to grade the degree of asymmetry of the lesion.

9. The maximum ratio of the standard deviations of the upper part vs. the lower part of the lesion, in %, sdud.

10. The maximum ratio of the medians of the right part vs. the left part of the lesion, in %, mlr.

11. The maximum ratio of the medians of the upper part vs. the lower part of the lesion, in %, mud.

12. The maximum ratio of the areas of the right part vs. the left part of the lesion, in %, Alr.

13. The maximum ratio of the areas of the upper part vs. the lower part of the lesion, in %, Aud.
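The following MATLAB sketch, referenced above, illustrates the segmentation and feature-extraction stage. It assumes the Image Processing Toolbox (graythresh and imbinarize stand in for the authors' own Otsu implementation), continues from the preprocessing sketch (greenFiltered), and the formulas used for the "maximum ratio" features (shown only for sdlr and mlr) are one plausible reading of their definitions, not the authors' exact equations.

```matlab
% Segmentation and feature extraction (sketch; continues from greenFiltered).
I     = greenFiltered;
level = graythresh(I);                        % Otsu threshold in [0, 1]
mask  = ~imbinarize(I, level);                % negative of the binary mask: lesion = 1

% Edge image: shift the mask along both axes and subtract modularly
edgeImg = abs(double(mask) - double(circshift(mask, [1 1]))) > 0;

% Smallest bounding box containing the lesion, and its centroid
[r, c] = find(mask);
m = max(r) - min(r) + 1;   n = max(c) - min(c) + 1;
centroid = [round(median(r)), round(median(c))];

% Size-related features
p   = sum(edgeImg(:));                        % 1. lesion perimeter
A   = sum(mask(:));                           % 2. lesion area
pa  = p / A;                                  % 3. perimeter/area ratio
pbb = 2 * (m + n);                            % 4. bounding-box perimeter
Abb = m * n;                                  % 5. bounding-box area

% Color-uniformity features
lesionPix = double(I(mask));
varlun = var(lesionPix);                      % 6. variance of the lesion grey levels
medlun = median(lesionPix);                   % 7. median of the lesion grey levels

% Asymmetry features (example: right vs. left split at the centroid column)
leftPix  = double(I(:, 1:centroid(2)));      leftPix  = leftPix(mask(:, 1:centroid(2)));
rightPix = double(I(:, centroid(2)+1:end));  rightPix = rightPix(mask(:, centroid(2)+1:end));
sdlr = 100 * max(std(leftPix), std(rightPix)) / min(std(leftPix), std(rightPix));             % 8.
mlr  = 100 * max(median(leftPix), median(rightPix)) / min(median(leftPix), median(rightPix)); % 10.
```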

Finally, the MATLAB Neural Network Pattern Recognition Tool was used to classify the lesions. A neural network with 5 neurons in the hidden layer and 2 in the output layer was used, following architecture criteria taken from the scientific bibliography (Da Silva et al., 2017). A MATLAB script was created to access the data, which were saved in a matrix named mela_dataset. The function num2str was used to go through all the images of this matrix consecutively and calculate the 13 features for each one; the features were saved in the vector melaInputs. With this input information, the neural network classified each lesion. A matrix (melaTargets) with 2 rows and 70 columns was used, representing the classification classes (benign or malignant) and the number of images for analysis, respectively.
To evaluate the proposed method, an annotated database of 35 skin-lesion images taken from the Internet was used. The format was JPEG with 24 bits, preserving the three components R (red), G (green) and B (blue). These RGB images had 150 x 150 x 3 pixels, with a resolution of 0.264 mm/pixel. Another 35 images, taken with the camera of a mobile device at the same resolution, were added to complete the database; this set was also annotated. The 70 images were mixed and divided, in a completely random way, into three independent groups: one for training (25.7 %), another for validation (24.3 %) and the last one for testing (50 %). No image was included in more than one group.
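A sketch of how this classifier and data split could be set up with patternnet, the scripted counterpart of the MATLAB Neural Network Pattern Recognition Tool, is shown below. The matrices melaInputs (13 x 70) and melaTargets (2 x 70) are those named in the text; the remaining options are assumptions, since the authors' exact training settings are not given.

```matlab
% Classifier sketch: 13-5-2 pattern-recognition network (assumed options).
% melaInputs  : 13 x 70 matrix of features (one column per image)
% melaTargets : 2 x 70 matrix of one-hot labels (melanoma / non-melanoma)
net = patternnet(5);                          % 5 neurons in the hidden (tansig) layer
net.divideParam.trainRatio = 0.257;           % training subset
net.divideParam.valRatio   = 0.243;           % validation subset
net.divideParam.testRatio  = 0.500;           % test subset
[net, tr] = train(net, melaInputs, melaTargets);
outputs = net(melaInputs);                    % network outputs for all 70 images
```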
The weight of each feature was adjusted iteratively to minimize the classification error over the training phase, where the back-propagation method was used. The errors were computed between the neural network output for each image and the desired output, which is known because the database is annotated. In this case, the minimization was verified through the decrease of the error gradient with respect to the weights (Song et al., 2016). The weights were updated in each iteration, and the solution was found when the error was minimum.
The mathematical formalism of this whole procedure is the following. This step starts with an input pattern p (skin lesion image) and a group of 13 features that characterize the pattern p.

In the present problem, there are 5 neurons in the hidden layer of the neural network; it is necessary to analyze the net input received by hidden neuron j (j = 1…5):

net_j^p = \sum_{i=1}^{N} w_{ji} \, x_i^p + \theta_j ,

where w_{ji} are the hidden-layer weights, x_i^p are the input features of pattern p, \theta_j is the threshold of the neurons and N = 13.
The output value of neuron j is obtained by an activation function f (a hyperbolic tangent in the proposed method):

y_j^p = f\!\left(net_j^p\right)
The net input to output neuron k is:

net_k^p = \sum_{j=1}^{H} v_{kj} \, y_j^p + \theta_k
The value of output k is:

y_k^p = f\!\left(net_k^p\right),

where v_{kj} is a coefficient (output-layer weight) and H = 5.
Over the training, the minimization of the following error is pursued for each pattern:

E_p = \frac{1}{2} \sum_{k} \left(d_k^p - y_k^p\right)^2 ,

where d_k^p is the desired output for the neuron with output k at the pattern p.
The general error of the method is:

E = \sum_{p=1}^{P} E_p ,

where P = 70 in this experiment (70 images).
The weights are modified following the decrease of the error gradient. E_p is a function of all the weights; the gradient of E_p is a vector whose components are the partial derivatives of E_p with respect to each weight, and the negative direction of the gradient is the one allowing the fastest decrease of the error:

\Delta w_{ji} = -\eta \, \frac{\partial E_p}{\partial w_{ji}} ,

where \eta is the learning rate.
To evaluate the performance of the method, the Mean Square Error (MSE) is used:

MSE = \frac{1}{P} \sum_{p=1}^{P} \sum_{k} \left(d_k^p - y_k^p\right)^2 ,

where P = 70 is the number of images and M = 13 is the number of input features of each pattern.
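As a worked illustration of this formalism, a toy MATLAB sketch of one gradient-descent update for a 13-5-2 tansig network is given below. The learning rate eta, the random initialization and the {-1, +1} re-encoding of the targets are assumptions for illustration, not the authors' settings.

```matlab
% One back-propagation step for the 13-5-2 tansig network (toy sketch).
eta = 0.01;                                   % assumed learning rate (eta)
W1 = 0.1 * randn(5, 13);  b1 = zeros(5, 1);   % hidden-layer weights w_ji and thresholds
W2 = 0.1 * randn(2, 5);   b2 = zeros(2, 1);   % output-layer weights v_kj and thresholds
x  = melaInputs(:, 1);                        % one pattern p
d  = 2 * melaTargets(:, 1) - 1;               % desired output re-encoded as {-1, +1}

y1 = tanh(W1 * x  + b1);                      % hidden outputs y_j^p = f(net_j^p)
y2 = tanh(W2 * y1 + b2);                      % network outputs y_k^p = f(net_k^p)
e  = d - y2;                                  % d_k^p - y_k^p

delta2 = e .* (1 - y2.^2);                    % local gradient at the output layer
delta1 = (W2' * delta2) .* (1 - y1.^2);       % local gradient at the hidden layer
W2 = W2 + eta * delta2 * y1';   b2 = b2 + eta * delta2;   % move against the gradient of E_p
W1 = W1 + eta * delta1 * x';    b1 = b1 + eta * delta1;
```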

The classification function used here was a hyperbolic tangent whose values are -1 (for abnormal) or 1 (for normal). Since the classification is binary, four possible results (true positive, false positive, true negative or false negative) can be obtained. Each image was classified into the output class, 1 (melanoma) or 2 (non-melanoma), whose value at the output node was the highest. If more than one output is high, or no output is high, a bad classification is obtained, which produces a false positive or a false negative.
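Continuing the classifier sketch above, the four outcomes can be counted by comparing the winning output node with the annotated class; the class indices (1 = melanoma, 2 = non-melanoma) follow the convention stated in the text.

```matlab
% Confusion counts from the network outputs (sketch; class 1 = melanoma).
[~, predicted] = max(outputs);                % winning output node per image
[~, truth]     = max(melaTargets);            % annotated class per image
TP = sum(predicted == 1 & truth == 1);        % true positives
TN = sum(predicted == 2 & truth == 2);        % true negatives
FP = sum(predicted == 1 & truth == 2);        % false positives
FN = sum(predicted == 2 & truth == 1);        % false negatives
```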
The method was implemented on a computer with an Intel Celeron processor running at 2.80 GHz and 4 GB of RAM.

RESULTS


The method was tested with 70 images. During the pre-processing procedure, the window size for the median filter was determined. This phase was very important because it helps to improve the segmentation (Huang et al.). The best median filter performance was achieved with a window of 7 x 7 pixels after a trial-and-error test, so all the images were preprocessed using this window size. This made it possible to segment all the images well. Histograms of all images were analysed prior to the segmentation step. The red channel cannot be used for segmentation because its histograms were not bimodal, which means that the intensity of the region of interest overlaps with the intensity of the background (Chen et al., 2003), as can be seen in the upper-left panel of Figure 2. For the green and blue channels (upper-right and lower-left panels of Figure 2, respectively), the histograms were well defined (the useful region and the background were well identified). In particular, the green channel was the best, according to the opinion of three experts: for this channel, the peaks were more separated and the ratio between the number of pixels corresponding to the lesion and those corresponding to the background was the highest, which means that the image contrast between the lesion and the background was the best. Therefore, this was the channel finally selected for the analysis of any suspicious lesion with the proposed method. The image intensity components and the number of pixels containing each intensity value can be appreciated in the lower-right panel of Figure 2.

Fig. 2. Histograms of an image for the red (upper-left), green (upper-right) and blue (lower-left) channels. The lower-right panel shows the intensity image.
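A minimal sketch of the per-channel histogram inspection summarized in Fig. 2 is shown below; it assumes imhist and rgb2gray from the Image Processing Toolbox and reproduces the panel layout of the figure.

```matlab
% Per-channel histograms and intensity image, laid out as in Fig. 2 (sketch).
names = {'red channel', 'green channel', 'blue channel'};
figure;
for ch = 1:3
    subplot(2, 2, ch);
    imhist(rgb(:, :, ch));                    % bimodality reveals lesion/background contrast
    title(names{ch});
end
subplot(2, 2, 4);
imshow(rgb2gray(rgb));                        % intensity image (lower-right panel)
title('intensity image');
```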

The whole details of a lesion segmentation by Otsu's method, following the procedure described above, can be observed in Fig. 3. In the last part of the segmentation procedure, each lesion was divided into 4 parts (upper, lower, left and right), using the calculated centroid of the 150 x 150 pixel bounding box (Fig. 3).

Fig. 3. Segmentation, bounding box insertion and lesion’s splitting.

In the next step, the 13 features were calculated for each lesion. For example, standard deviations were calculated by parts (sdu, sdd, sdr and sdl, respectively), and the ratios between them were obtained (sdud for the vertical axis and sdlr for the horizontal axis, with Equations 5 and 4, respectively). A median of 4.8 % was obtained for sdud; 34.3 % of the benign lesions surpassed this value, as did 65.7 % of the malignant ones. For the horizontal axis, the median was 5.9 %, and it was surpassed by 62.9 % of the malignant lesions and 40 % of the benign lesions. These results were expected, because a higher standard-deviation ratio implies a higher probability of a malignant lesion.
Another example of feature calculation is the medians of the lesion in the 4 parts (mu, md, mr and ml). These values were used to calculate the ratios between the medians for each image (mud for the vertical axis and mlr for the horizontal axis, with Equations 6 and 7, respectively). In the first case, a median of 8.0 % was obtained, and it was surpassed by 40 % of the benign lesions and 60 % of the malignant lesions. For the horizontal axis, similar values were obtained. A higher ratio between the medians of the two evaluated parts implies a lower symmetry of the lesion, that is, a higher probability of melanoma. A similar procedure and analysis were followed for the areas of the four parts (Equations 8 and 9), as well as for the rest of the features.
In the last step of the proposed method, the feature values of each image were used as the input of the neural network. The classifier assigned a weighting factor to each feature; these weighting factors were adjusted during the training phase and evaluated during validation and testing.
The results of the network evaluation are presented in Figure 4. The values of the mean square error for the three data subsets (training, validation and test) and the error gradient for the training data are shown to be below 0.2.

Fig. 4. Performance of the neural network.

Nineteen iterations were needed to reach the convergence of the neural network. The justification is that after epoch 13 the mean square errors were almost constant and minimal for the three data subsets (Song et al.); furthermore, the error gradient during training was minimal at epoch 13.

Finally, the results for the test data showed 91.4 % of correct classification. Only six images were wrongly classified (1 malignant and 5 benign, i.e. FN = 1, FP = 5). Table I shows the results according to the classical performance measures, i.e. sensitivity, specificity, positive predictive value, negative predictive value and accuracy (Taboada-Crispi et al., 2009).

Table I. Results obtained with Melanoprot.

DISCUSSION


These results are fairly good. The sensitivity, detection rate, or recall, which is the probability of detecting melanoma correctly [Se = 100TP / (TP+FN)], was as high as 96.8 %. The specificity, which is the probability of detecting non-melanoma correctly [Sp = 100TN / (TN+FP)], was acceptable (87.2 %). In our opinion, this parameter can still be improved if the neural network is trained with a larger database to avoid possible "overfitting".
On the other hand, the positive predictive value, precision, or Bayesian detection rate, which is the probability of having melanoma correctly detected [Pp = 100TP / (TP+FP)], was affected by the presence of 5 false positives, but the result is still acceptable (85.7 %). It means that the system tends to detect more positive than negative cases with the data used. In any case, the riskiest outcome (a false negative prediction) was very infrequent. The negative predictive value, or Bayesian negative rate, which is the probability of having non-melanoma correctly classified, according to Pn = 100TN / (TN+FN), was 97.1 %.
In general, we consider the value of 91.4 % for the classification rate, accuracy, or simple matching coefficient, which is the rate of correct classifications [Cc = 100(TP+TN) / (TP+TN+FP+FN)], a good result. The set of 13 input features can be considered adequate for the correct performance of the neural network designed as classifier in the proposed method.
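The measures defined in the preceding paragraphs can be computed directly from the confusion counts of the classification sketch given earlier, for example:

```matlab
% Performance measures from the confusion counts (TP, TN, FP, FN as above).
Se = 100 * TP / (TP + FN);                    % sensitivity / recall
Sp = 100 * TN / (TN + FP);                    % specificity
Pp = 100 * TP / (TP + FP);                    % positive predictive value (precision)
Pn = 100 * TN / (TN + FN);                    % negative predictive value
Cc = 100 * (TP + TN) / (TP + TN + FP + FN);   % accuracy / classification rate
```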
There are other possibilities to improve the performance of the method. A standardized image acquisition, using a device with a rigid structure, would help to keep the distance between the mobile camera and the patient's skin constant. It would also be useful to provide a grid reference for the manual placement of the camera and for the automatic cropping and centering of the image.
A proper comparison of this new method with other methods, such as the ABCD rule, Menzies's method or the 7-point list, has not been done because, unfortunately, we do not have the same databases used in those previous works. This is the main limitation of the present work. For example, in (Amaliah), the classical ABCD rule was used for melanoma diagnosis with 30 images; the percentage of correct classification was 85 %, with 4 misclassified cases. That method only graded 4 features on a qualitative scale. Theoretically, 9 of our cases would be misclassified using the ABCD rule. In comparison, the proposed method offers a more robust analysis, also using asymmetry, dimension and non-uniformities in color and borders, but including them in a mathematical formalism of 13 features, which grades the image in more detail and improves the classification results.
A similar approach to the present work was presented in (Abuzaghleh et al.), using other features and steps. Those authors also obtained percentages of correct classification higher than 90 % when the subjective ABCD method was improved; they also programmed mathematical equations derived from the original four features of the ABCD rule.
The method presented is simple and has an easy and fast implementation: the computation took just 3 seconds after the image was loaded on the computer used. The proposal would not substitute the subsequent professional evaluation, which should be done by specialists in order to correctly manage a patient with a real diagnosis of melanoma. This type of e-Health tool is particularly useful for the developing world (Camacho et al., 2016).

CONCLUSIONS

The proposed method to detect lesions suspicious of melanoma was successful under the experimental conditions. The green channel was verified as the best for the algorithm, and a window of 7 x 7 pixels for the median filter during the preprocessing step was able to guarantee the posterior segmentation using Otsu's algorithm. The 13 features implemented in MATLAB, whose results were introduced into a neural network with 5 neurons in the hidden layer and 2 in the output layer, produced a good performance of the method. Finally, more than 90 % of correct classification into melanoma and non-melanoma was obtained with our proposal.

ACKNOWLEDGEMENTS

The authors would like to thank the “Health Technology Task Group (HTTG)”, from the IUPESM, especially Prof. Cari Borras, for encouraging this paper.

ROJAS, J. R.; PÉREZ, M. D.; TABOADA, A. C. Method for analysis of skin lesions suspicious of melanoma. J. health med. sci., 4(2):115-122, 2018.
ABSTRACT: Computational vision is being widely used for biomedical applications, given the fast and accurate computing capabilities of current computers and mobile devices. In the present work, a method for the fast detection of lesions suspicious of melanoma is proposed. Skin-lesion images in JPEG format of 150 x 150 pixels were used: 35 of melanoma and 35 of non-melanoma, from two annotated databases. A 7 x 7 pixel median filter was applied to the green channel and the images were segmented with Otsu's method, to be used as reference for the extraction of features inspired by the ABCD rule (Asymmetry, Border, Color and Dimension). The 13 features implemented were: the perimeter and area of the lesion, their quotient, the perimeter and area of the bounding box, the variance and median of the lesion, as well as the ratios of the standard deviations, medians and areas of the right vs. the left part and the upper vs. the lower part. These features were fed into a neural network with 5 neurons in the hidden layer and 2 in the output layer which, after training and validation, achieved percentages of accuracy, sensitivity, specificity and predictive value (positive and negative) similar to or higher than those reported by previous methods, with correct-classification percentages above 90 %.

KEYWORDS: computational vision, melanoma detection, digital image processing, neural network.

REFERENCES

Da Silva, I. N.; Spatti, D. H.; Flauzino, R. A.; Liboni, L. B. & dos Reis Alves, S. F. Artificial neural network architectures and training processes. In: Artificial Neural Networks. A Practical Course. Switzerland, Springer, 2017.

Esteva, A.; Kuprel, B.; Novoa, R. A.; Ko, J.; Swetter, S. M.; Blau, H. M. & Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(1):115-8, 2017.

Gutman, D.; Codella, N.; Celebi, M. E.; Helba, B.; Marchetti, M.; Mishra, N. K. & Halpern, A. Skin lesion analysis toward melanoma detection: a challenge at the International Symposium on Biomedical Imaging (ISBI) 2016, hosted by the International Skin Imaging Collaboration (ISIC). arXiv preprint arXiv:1605.01397, 2016.

Huang, T.; Yang, G. & Tang, G. A fast two-dimensional median filtering algorithm. IEEE Trans. Acoust., Speech, Signal Processing, 27(1):13-8, 1979.

Karabulut, E. M. & Ibrikci, T. Texture analysis of melanoma images for computer-aided diagnosis. In: Annual International Conference on Intelligent Computing, Computer Science and Information Systems. Pattaya, Thailand, 2016.

Otsu, N. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1):62-6, 1979.

Pradipnaswale, P. & Ajmire, P. E. Image classification techniques: a survey. IJETTCS, 5(2):236-9, 2016.

Saravanan, K. & Sasithra, S. Review on classification based on artificial neural networks. IJASA, 2(4):11-8, 2014.

Shetty, P. & Turkar, V. Melanoma decision support system for dermatologist. IJCA Proceedings on International Conference on Recent Trends in Information Technology and Computer Science (ICRTITCS-2011), (2):28-30, 2012.

Song, Y.; Schwing, A. G.; Zemel, R. S. & Urtasun, R. Training deep neural networks via direct loss minimization. arXiv:1511.06411 [cs.LG], 2016.

Taboada-Crispi, A.; Sahli, H.; Orozco, O. M.; Hernández, P. D. & Falcón, R. A. Anomaly detection in medical image analysis. In: Exarchos, T. P.; Papadopoulos, A. & Fotiadis, D. Handbook of Research on Advanced Techniques in Diagnostic Imaging and Biomedical Applications. New York, Medical Information Science Reference, 2009.

Tyagi, A.; Miller, K. & Cockburn, M. E-Health tools for targeting and improving melanoma screening: a review. J. Skin Cancer, 2012:437502, 2012.

Will, K. Computer vision and the future of mobile devices. TechRepublic, 2014. Available at: http://www.techrepublic.com/article/computer-vision-and-the-future-of-mobile-devices/

 

Correspondence to:

 

Marlen Pérez Díaz

Universidad Central "Marta Abreu" de Las Villas

Carretera a Camajuaní km 5 ½, Santa Clara 54830

CUBA


E-mail: mperez@uclv.edu.cu

