Babol Noshirvani University of Technology
Journal of Soft Computing and Information Technology (ISSN 2383-1006)
Volume 9, Issue 1, published 2020-05-21

Title: Image De-noising Based on a Local Optimal Balance between Data Fidelity and Output Smoothness
Pages: 1-6 (article 61697)
Language: Persian (FA)
Authors: Arefeh Khanlari (Babol Noshirvani University of Technology); Mehdi Ezoji (Babol Noshirvani University of Technology)
Type: Journal Article (received 2016-02-16)
Abstract: This paper addresses the image-denoising problem through the minimization of an appropriate energy function, which consists of a data-fidelity term and a targeted smoothness term. A local optimal balance between these two terms is considered; this strategy leads to image-invariant denoising while simultaneously preserving edges. Experimental results verify the performance of the approach.
PDF: https://jscit.nit.ac.ir/article_61697_eb7666b241e54acb3803339081d2ad7b.pdf

Title: To Move or Not to Move: An Iterative Four-Phase Cloud Adoption Decision Model for IT Outsourcing Based On TCO
Pages: 7-17 (article 93088)
Language: Persian (FA)
Authors: Mirsaeid Hosseini Shirvani (Department of Computer Engineering, Sari Branch, Islamic Azad University, Sari, Iran; ORCID 0000-0001-9396-5765)
Type: Journal Article (received 2017-07-01)
Abstract: Information technology outsourcing (ITO) of organizations to cloud datacenters promises cost effectiveness over traditional on-premises deployment.
In this regard, making a sustainable decision toward cloud adoption requires a profound understanding of cost implications as well as social and environmental issues. Policymakers face several concerns and challenges when they encounter the dilemma of IT options. Although cloud migration has potential merits, accounting for a reduction in the total cost of ownership (TCO), there may be demerits in the particular situation of each industry and organization, such as uncertainty over privacy, security, and communication delay. This paper introduces an iterative four-phase cloud adoption decision model for IT outsourcing that addresses these concerns and challenges by considering the cost implications of each contingent option and applying the net present value (NPV) of each alternative over the investment period, along with an analysis of non-economic issues. The model also leverages Moore's law to estimate the future prices of IT devices and interviews with Delphi panelists to weight cloud adoption determinants and inhibitors. The new services of the Telecommunication Company of Mazandaran province (TCM), a large-scale enterprise in Iran, are used as a case study to evaluate the effectiveness of the proposed model over six years of investment. Implementation of the model for TCM shows that it is better to establish a private on-premises datacenter and to apply hybrid deployment during bursts of resource demand.
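The NPV comparison at the core of the decision model can be sketched as follows. This is a minimal illustration, assuming a 10% discount rate and made-up cash flows for three hypothetical deployment alternatives; none of the figures come from the TCM case study.

```python
# Illustrative NPV comparison of IT deployment alternatives over a
# six-year investment horizon. All figures below are hypothetical
# placeholders, not values from the TCM case study.

def npv(rate, cashflows):
    """Net present value of yearly cash flows; cashflows[0] is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

DISCOUNT_RATE = 0.10  # assumed yearly discount rate

# Year-0 capital expenditure followed by six years of operating costs,
# expressed as negative cash flows, so the least negative NPV wins.
alternatives = {
    "public_cloud": [0, -150, -150, -150, -150, -150, -150],
    "on_premises":  [-400, -40, -40, -40, -40, -40, -40],
    "hybrid":       [-250, -80, -80, -80, -80, -80, -80],
}

best = max(alternatives, key=lambda k: npv(DISCOUNT_RATE, alternatives[k]))
for name, flows in alternatives.items():
    print(f"{name:12s} NPV = {npv(DISCOUNT_RATE, flows):8.2f}")
print("preferred option:", best)
```

With these placeholder costs the on-premises option has the least negative NPV, mirroring the direction of the paper's conclusion; real inputs would come from the TCO analysis and the Moore's-law price estimates described in the abstract.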
PDF: https://jscit.nit.ac.ir/article_93088_e91f3e2928635ff1e0bb8c8a9ec46d44.pdf

Title: A Speech Act Classifier for Persian Texts and its Application in Identifying Rumors
Pages: 18-27 (article 103557)
Language: Persian (FA)
Authors: Zoleikha Jahanbakhsh-Nagadeh (Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran); Mohammad-Reza Feizi-Derakhshi (Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Iran; ORCID 0000-0002-8548-976X); Arash Sharifi (Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran; ORCID 0000-0002-2441-9477)
Type: Journal Article (received 2019-02-12)
Abstract: Speech acts (SAs) are one of the important areas of pragmatics; they give us a better understanding of the speaker's state of mind and convey an intended language function. Knowledge of the SA of a text can be helpful in analyzing that text in natural language processing applications. This study presents a dictionary-based statistical technique for Persian SA recognition. The proposed technique classifies a text into seven SA classes based on four criteria: lexical, syntactic, semantic, and surface features. WordNet is utilized as the tool for extracting synonyms and enriching the feature dictionary. To evaluate the proposed technique, we utilized four classification methods: Random Forest (RF), Support Vector Machine (SVM), Naive Bayes (NB), and K-Nearest Neighbors (KNN). The experimental results demonstrate that the proposed method, with RF and SVM as the best classifiers, achieves state-of-the-art performance with an accuracy of 0.95 for the classification of Persian SAs.
The original vision of this work is to introduce an application of SA recognition to social media content, especially identifying the common SAs in rumors and applying them to rumor detection. The proposed system was therefore used to determine the common SAs in rumors. The results showed that Persian rumors are often expressed in three SA classes, namely narrative, question, and threat, and in some cases with the request SA. The evaluation results also indicate that the SA, as a distinctive feature between rumors and non-rumors, improves the accuracy of rumor identification from 0.762 (based on common context features) to 0.791 (using the combination of common context features and the four SA classes).
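A dictionary-based SA scorer in the spirit of this technique can be sketched as follows. The class names and cue lexicons here are illustrative stand-ins: the paper's actual method covers seven Persian SA classes and combines lexical, syntactic, semantic, and surface features, none of which are reproduced here.

```python
# Minimal sketch of a dictionary-based speech-act (SA) scorer: each SA
# class has a cue lexicon, and a text is assigned the class whose cues
# it matches most. Lexicons below are illustrative assumptions.

SA_LEXICONS = {
    "question":  {"who", "what", "when", "where", "why", "how", "?"},
    "request":   {"please", "could", "would", "kindly"},
    "threat":    {"or else", "will regret", "warning"},
    "narrative": {"said", "reported", "announced", "claimed"},
}

def classify_speech_act(text):
    """Return the SA class with the most matched cues (ties favor 'narrative')."""
    lowered = text.lower()
    tokens = set(lowered.replace("?", " ? ").split())
    def score(cls):
        # single-word cues match whole tokens; multi-word cues match substrings
        return sum(1 for cue in SA_LEXICONS[cls]
                   if (cue in lowered if " " in cue else cue in tokens))
    return max(SA_LEXICONS, key=lambda cls: (score(cls), cls == "narrative"))

print(classify_speech_act("Why was the school closed today?"))   # -> question
print(classify_speech_act("Officials said the report is false")) # -> narrative
```

A production version would replace the hand-written lexicons with the WordNet-enriched feature dictionaries and trained classifiers (RF, SVM) described in the abstract.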
PDF: https://jscit.nit.ac.ir/article_103557_39ee4141ca88e2610c15237386cdb480.pdf

Title: EGA: An Enhanced Genetic Algorithm for Numerical Functions Optimization
Pages: 28-35 (article 103663)
Language: Persian (FA)
Authors: Asadollah Shahbahrami (Department of Computer Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran; ORCID 0000-0002-5195-1688); Kiumars Ghazipour (Department of Computer Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran)
Type: Journal Article (received 2018-12-01)
Abstract: Optimization is the process of making something as good or effective as possible. Optimization problems arise in many fields, such as economics, science, industry, and engineering. The growing use of optimization makes it essential for researchers in every branch of science and technology. Many algorithms have been introduced to solve optimization problems, yet achieving higher-quality results in terms of accuracy and robustness remains an issue. Metaheuristics are widely recognized as efficient approaches for many hard optimization problems. In this study, to achieve higher-quality results in numerical function optimization, two new operators, named N-digit lock search (NLS) and Two-Math crossover, are introduced to enhance the genetic algorithm (GA), a widely used metaheuristic.
The NLS operator is inspired by the N-digit combination-lock pattern and enhances the exploitative behavior of the GA by calibrating the current best solution, while the Two-Math crossover operator combines the two-point and arithmetic crossover techniques to better guide the overall search process. The proposed enhanced genetic algorithm (EGA) is tested over 33 benchmark mathematical functions, and the results are compared to population-based algorithms, namely particle swarm optimization (PSO2011) and artificial bee colony (ABC), as well as single-solution-based ones, namely simulated annealing (SA), pattern search (PS), and vortex search (VS). A problem-based test comparing the performance of the algorithms shows that the proposed EGA outperforms all the other algorithms: SA, PS, VS, PSO2011, and ABC. In addition, it surprisingly finds the global best points for almost all 33 test functions with a constant value for 2 of the 3 EGA operators.
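Because the abstract does not give the exact formulation of the Two-Math operator, the sketch below is only one plausible reading of "combining two-point and arithmetic crossover": outside two random cut points, genes are exchanged wholesale as in two-point crossover, and between the cut points they are blended arithmetically. The real-valued chromosome layout and the blend weight alpha are assumptions.

```python
import random

def two_math_crossover(p1, p2, alpha=0.5, rng=random):
    """Hypothetical blend of two-point and arithmetic crossover:
    genes outside the cut points [i, j) are exchanged wholesale, and
    genes inside are blended as alpha*a + (1 - alpha)*b."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n + 1), 2))   # two distinct cut points
    c1, c2 = list(p2), list(p1)                  # two-point part: swap
    for k in range(i, j):                        # arithmetic part: blend
        c1[k] = alpha * p1[k] + (1 - alpha) * p2[k]
        c2[k] = alpha * p2[k] + (1 - alpha) * p1[k]
    return c1, c2

random.seed(0)
a, b = [0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]
child1, child2 = two_math_crossover(a, b)
print(child1, child2)   # blended genes are 0.5; swapped genes are 1.0/0.0
```

With all-zero and all-one parents, every child gene is either a swapped value (0.0 or 1.0) or the 0.5 blend, which makes the two mechanisms easy to tell apart in the output.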
PDF: https://jscit.nit.ac.ir/article_103663_9360fd8481f1a8130f3d348fefd4285d.pdf

Title: Estimation of Ear Parameters Applicable to Otoplasty Surgery
Pages: 36-45 (article 104387)
Language: Persian (FA)
Authors: Ali Fahmi Jafargholkhanloo (Faculty of Biomedical Engineering, Sahand University of Technology); Mousa Shamsi (Faculty of Biomedical Engineering, Sahand University of Technology)
Type: Journal Article (received 2019-05-11)
Abstract: Analysis of facial color images is very important owing to its numerous applications in facial surgeries. The development of different tools for facial surgery analysis has helped surgeons before and after surgery. In this article, an active contour model (ACM) based on Local Gaussian Distribution Fitting (LGDF) is introduced for contour extraction of the ear area.
The LGDF model is a region-based active contour method that, unlike other models such as Chan-Vese, is not sensitive to intensity inhomogeneity in the image. After contour extraction of the ear area, in the second step, four landmarks are detected and the ear parameters, comprising the length, width, and external angle of the ear, are measured for analysis in otoplasty surgery. The proposed algorithm was evaluated on the AMI and Sahand University of Technology (SUT) databases, achieving accuracies of 96.432%, 97.423%, and 85.546% on the AMI database and 98.381%, 97.237%, and 87.864% on the SUT database for the ear length, width, and external angle, respectively.
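The landmark-based measurement step can be made concrete with a small sketch. The landmark convention used here (top/bottom points defining the ear length axis, front/back points defining the width, and the external angle taken between the length axis and the vertical image axis) is an assumption for illustration; the paper defines its own four landmarks on the extracted contour.

```python
import math

def ear_parameters(top, bottom, front, back):
    """Return (length, width, external_angle_deg) from four 2D landmarks.
    The landmark layout is an assumed convention, not the paper's."""
    length = math.dist(top, bottom)   # ear length along the top-bottom axis
    width = math.dist(front, back)    # ear width along the front-back axis
    # external angle: tilt of the length axis from the vertical image axis
    dx, dy = bottom[0] - top[0], bottom[1] - top[1]
    angle = math.degrees(math.atan2(abs(dx), abs(dy)))
    return length, width, angle

# Hypothetical landmark coordinates in pixels (image y grows downward).
length_px, width_px, angle_deg = ear_parameters(
    top=(10, 0), bottom=(20, 60), front=(5, 30), back=(35, 30))
print(f"length={length_px:.2f}px width={width_px:.2f}px angle={angle_deg:.2f} deg")
```

In the pipeline described above, these landmark coordinates would come from the LGDF contour rather than being hand-specified.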
PDF: https://jscit.nit.ac.ir/article_104387_b189e04d440ee6c282400104038dbc1d.pdf

Title: Script-Independent Handwritten Text line Segmentation Using Directional 2D Filters
Pages: 46-60 (article 107021)
Language: Persian (FA)
Authors: Majid Ziaratban (Department of Electrical Engineering, Faculty of Engineering, Golestan University, Gorgan, Iran; ORCID 0000-0003-4560-4759)
Type: Journal Article (received 2019-04-07)
Abstract: Text line segmentation is an important stage of optical character recognition (OCR) algorithms: to analyze and recognize a document, its text lines have to be segmented accurately. Text line segmentation of handwritten documents is more difficult than that of machine-printed ones; curved and multi-skewed text lines, overlapping text lines, and very small text lines are the main challenges. Most previous approaches did not consider the local features of text lines in a document image, whereas the proposed method considers both global and local features. The method is based on directional 2D anisotropic filters, and its parameters are tuned from a main global parameter computed separately for each document; hence, the method is dataset-independent. A document is divided into several blocks, for each of which local characteristics such as the block skew are calculated, and text regions in each block are detected using these local characteristics. To estimate the skew of the text regions in a block, a novel text-block skew estimation algorithm is proposed in this paper. Experimental results show that the proposed method outperforms the state-of-the-art methods on three standard datasets.
Our final F-measures are 0.54%, 0.03%, and 0.02% greater than those of the winner of the ICDAR2013 text line segmentation contest on the ICDAR2013, ICDAR09, and HIT-MW datasets, respectively. The experiments show that the proposed method can accurately segment the text lines of complicated handwriting.
PDF: https://jscit.nit.ac.ir/article_107021_247c922a63e9e6b7f279c88d33bf1bbf.pdf
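As a concrete point of reference for the block-skew estimation step mentioned in the last abstract, the sketch below shows a common projection-profile baseline, not the novel algorithm proposed in the paper (which the abstract does not detail): foreground pixels are sheared by each candidate angle, and the angle whose row profile is most concentrated, i.e. has the largest sum of squared row counts, is kept as the skew estimate.

```python
import math

# Projection-profile skew estimation baseline (assumed, not the paper's
# algorithm): at the correct skew, text pixels collapse onto few rows,
# so the row-count profile is maximally concentrated.

def estimate_skew(pixels, angles_deg):
    """pixels: list of (x, y) foreground coordinates. Returns the
    candidate angle (degrees) with the most concentrated row profile."""
    def profile_energy(theta):
        rows = {}
        t = math.tan(theta)
        for x, y in pixels:
            r = round(y - x * t)          # shear the pixel onto a row index
            rows[r] = rows.get(r, 0) + 1
        return sum(c * c for c in rows.values())
    return max(angles_deg, key=lambda a: profile_energy(math.radians(a)))

# Synthetic block: two text lines skewed by 5 degrees, 20 pixels apart.
slope = math.tan(math.radians(5))
pixels = [(x, round(x * slope) + y0) for x in range(100) for y0 in (0, 20)]
best_angle = estimate_skew(pixels, range(-10, 11))
print("estimated skew:", best_angle, "degrees")
```

The paper's method instead derives local block skews as part of its directional-filter pipeline; this baseline only illustrates what "estimating the skew of a text block" means operationally.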