
Informatica Economică vol. 17, no. 1/2013, DOI: 10.12948/issn14531305/17.1.2013.10

Particularities of Verification Processes for Distributed Informatics Applications

Ion IVAN¹, Cristian CIUREA¹, Bogdan VINTILĂ², Gheorghe NOȘCA³
¹Department of Economic Informatics and Cybernetics, Academy of Economic Studies, Bucharest, Romania
²Ixia, Bucharest, Romania
³Association for Development through Science and Education, Bucharest, Romania
[email protected], [email protected], [email protected], [email protected]

This paper presents distributed informatics applications and the characteristics of their development cycle. It defines the concept of verification and identifies the differences from software testing. Particularities of the software testing and software verification processes are described. The verification steps and necessary conditions are presented, and the factors that influence the quality of verification are established. Software optimality verification is analyzed and metrics are defined for the verification process.

Keywords: Distributed Informatics Applications, Software Testing, Software Verification, Verification Process, Software Optimality

1 Characteristics of Distributed Informatics Applications

Distributed informatics applications are software constructions based on architectures whose components, through interaction, allocate resources in real time. Distributed informatics applications include:
- a heterogeneous group of users, with many elements that, through interaction, solve their well-defined problems, characterized by the volume of data entered, the sequence of operations that is activated and the concrete results that mark either the success of the interaction or the need to repeat some components of the operations chain, specifying the cause and the manner of resolution; after a few replays each user successfully completes the interaction, receiving the message that his problem has been solved correctly and completely;
- the dynamic definition of the computer network through which the messages issued by users are transferred, without a predefined limit, the only restrictions being those related to the hardware resources that ensure compatibility with the data acquisition system and to connection performance;

- conducting a development cycle that includes steps such as: defining the target group and setting its size; defining the set of distinct problems that are subject to processing, where for each problem appropriate definition tools are used to reduce the risk of sub-definition or supra-definition, situations that force the reversal of prior steps when the lack of information, in the case of sub-definition, or the excess of information, in the case of supra-definition, generates effects that interrupt the development cycle and make it impossible to pass to the next stage; the stage of clear and consistent specifications; the development stage of informatics solution variants, accompanied by performance estimation models, which requires choosing the suitable variant against the criterion by which the multiplication effects are managed at the moment of implementation; the code elaboration stage, seen as an optimal resource allocation process, knowing that instructions, data structure definition mechanisms and the building of procedure sequences must be understood as the use of a practically infinite resource whose variants nevertheless differ from each other in terms of the performance criteria of the informatics application taken as a whole; the testing stage [1], which plays a very special role for distributed informatics applications



because these applications operate independently of the developer, and the user has limited possibilities to manage uncontrolled situations resulting from existing errors in procedures that allocate resources wrongly or generate random processing behavior, which ultimately creates discomfort for users (elaboration, documentation, implementation).

Modern distributed informatics applications are investments, so they include:
- the investor, who pays for the development of the distributed informatics application;
- the staff providing application development;
- the staff ensuring exploitation management;
- the users, who solve their problems with the distributed informatics application, which thereby becomes a service provider;
- from the amounts transferred by the users to other destinations, a part returns to the investor and a part to the management of the application, so the investment is recovered and those who ensure the management obtain their profit.

In modern distributed informatics applications the user is a beneficiary of the options and of the database, and there are investors who recover their investment through the services the users benefit from: for example, booking.com. It is considered that there is an investor. In the database, the hotels place details regarding the number of rooms, prices, photos and so on. The database contains loyal users and, after their first transaction, new users. The hotel owners and the customers benefit from the services of the website. The customer pays at the hotel and an amount of x% goes to the website, that is, to the investor; the hotelier pays only to appear and to be hosted on the website.
The informatics system of a bank is collaborative because it has a large number of components, a large variety of links

between them, and it requires a high level of connectivity and integrability [2]. The components of a banking informatics system are distributed applications that communicate with each other and are integrated into a whole. Over time, banks have improved their informatics systems by increasing the degree of integrability of their component applications.
Another indicator that banks pursue is the degree of portability of the bank's informatics applications, according to which a bank can migrate its informatics system from one work environment to another, especially to fulfill disaster recovery procedures.
A distributed informatics application used in a bank is the Collaborative Servicedesk application, which allows analyzing the types of problems reported by internet banking users. Having the database with all customer requests, the bank determines the strategies for addressing each client, depending on the history of problems he encountered.
The Collaborative Servicedesk application adapts to input data and modifies its components so as to provide maximum utility and customer support regardless of the category users belong to.
The verification process of the Collaborative Servicedesk application is different from the testing process, because it requires understanding the problem and discussions with the analysts, in accordance with the objectives established at application implementation.
Application testing involves performing a battery of tests to ensure the accuracy of the information recorded and to validate the manner in which the application operates.

The verification of the testing process involves:
- verifying that all proposed tests have been run;
- verifying that the tests performed cover all of the problem or only a part of it;
- verifying that the test report correctly reflects what happened with the application.
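As a hedged illustration of the first check and of a simple consistency aspect of the third, the sketch below compares a hypothetical list of planned test identifiers with the identifiers recorded in the execution report; the identifiers and data are assumptions, not part of the paper.

import java.util.*;

public class TestRunVerification {
    public static void main(String[] args) {
        // Hypothetical planned and executed test case identifiers.
        Set<String> planned = new HashSet<>(Arrays.asList("TC-01", "TC-02", "TC-03"));
        Set<String> executed = new HashSet<>(Arrays.asList("TC-01", "TC-03"));

        // Check: all proposed tests have been run.
        Set<String> notRun = new HashSet<>(planned);
        notRun.removeAll(executed);
        System.out.println(notRun.isEmpty()
                ? "All proposed tests were run."
                : "Tests not run: " + notRun);

        // Check: the report does not contain tests outside the proposed set,
        // which would suggest it does not reflect the agreed plan.
        Set<String> unplanned = new HashSet<>(executed);
        unplanned.removeAll(planned);
        System.out.println(unplanned.isEmpty()
                ? "All executed tests were planned."
                : "Executed but not planned: " + unplanned);
    }
}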

Figure 1 shows the stages of the verification of the testing process for a distributed informatics application:


Fig. 1. Verification stages of distributed informatics applications

In the current use stage of the application, the verification process includes the following elements:
- verification of whether access to resources was defined for all enrolled users; if there are n real users and m enrolled users in the application, then n = m is fine, n > m means that not all users were enrolled, and n < m means that users who are not part of the real user group were enrolled.


Due to a vast set of factors, the software testing process must adapt and shorten its duration. Should all possible tests be run on a software application, even a very small one, the possibilities are almost unlimited and, thus, the duration of the process becomes extremely large. Budget and time are the main factors that limit software testing to only a very small part of the total executable test cases.
The quality of the tested software product is determined to a large extent by the experience and expertise of the QA team that creates, executes and evaluates the test cases. Test cases are supposed to cover all execution paths, and for this they must cover all GUI options as well as supply different values in different fields, as these influence the calculations underneath. Should an inexperienced QA team create test cases that cover the functionality of the tested software only to a small degree, the problems will arise at the customer site and the experience will be bad.
The execution of the test cases is almost as important as the test cases themselves. Even if the test cases cover the largest part of the application's functionality and the sample values are good enough to catch most of the problems, if they are run by inexperienced people the results of the process will be unreliable. In the execution phase the QA member must also pay attention to other factors that might influence the execution of the application, such as Internet connectivity and

previous actions. Some issues in software programs occur only after a very long and complex set of actions, and the QA member executing the test cases must be attentive enough to remember all the actions he performed when a problem arises, in order to be able to reproduce and document it quickly and successfully.
Test cases and their execution lead to results. These are the data provided by the software program under test after processing the data input. In order for an application to have high quality, the results must be correct, complete and consistent.

The correctness of the results is given by whether the actual results for some data input match the expected results. As the QA team knows the application's functions, they can also predict the results for a certain data input set. If the actual and expected results don't match, there are several situations that might have caused this:
- the test case was poorly designed and the input data set is not valid for the expected results; this might be due to modifying the input data set after manually computing the results, or to an error in the manual computation;
- the test case was executed poorly, meaning that not all settings were done or not all values were inserted correctly; this is the result of inexperienced team members or of environment issues such as stress;
- the functionality of the application in the tested area changed and the QA team is not aware of it; this might happen because the changes in functionality are very new and there was not enough time for the information to propagate towards the QA team, or because of poor communication between the development and QA teams;
- there is a problem in the application's functionality; in this case the QA team must raise the problem and document it well, so that the development team can reproduce it easily and test the fix once it is done.

The completeness of the results means that exactly the expected results are supplied, no more, no less. If more results than the expected ones are supplied, there might be a problem: a setting might not have been done or a value of the test case might have been inserted wrongly. The case of missing results is similar, but adds something more. When incomplete results appear, something might have happened that halted the application's execution at some point, and there is likely a warning or an exception thrown about the fact. When a warning is shown to the user, it notifies him that some of the input data might cause problems, but the execution will continue. An exception notifies the user that


changed or supplemented by an algorithm that can compute those values.
When performing verification at this stage of the development cycle, a keen eye must be kept open to see if the defined problem is really the one the users experience. A poor definition of the problem leads only to a waste of resources.
The target group definition stage of the development cycle identifies the generic set of users that the software product addresses. Defining the target group means considering criteria such as territory, age, education level, access frequency, etc. Some of these criteria are general regardless of the software product that is developed, such as access frequency, but most of them depend on the nature of the software. For this stage, verification consists in:
- checking that no significant part of the

future users have been left aside; small omissions of user categories are acceptable as long as their share in the total is below a few percent, but larger omissions mean that the software is not designed to fit all the users and, thus, is prone to losing market share;
- checking that groups with a very low chance of becoming actual users have not been included in the target group; since citizen oriented informatics applications must be designed so that all users from the target group, regardless of their education and experience, can use the application without prior training, including groups with a very small chance of becoming real users just adds extra weight to the process of design and development, due to the extra restrictions these groups add; also, the benefits brought in by such small groups are outweighed by the costs of the extra resources needed in the development cycle;

- checking that all significant criteria have been introduced in the characterization of the target group; the criteria used to characterize the target group determine the characteristics and particularities of the software application since, through these criteria, behavior patterns are determined; ignoring one or more of the important criteria might lead to the loss of a particular part of the target group, as the software product does not correspond to their requirements;
- checking that all criteria used for the characterization of the target group have an impact on the behavior patterns of the users; if criteria that have no impact on the behavior patterns are considered, the application will be overloaded with features and particularities that don't improve the user experience, but increase the complexity of the software and the resources necessary during the

development cycle.
For online applications, success is given not by the offered service, not by the clean interface, not by the neat features, but by the number of users. The larger the number of active users, the larger the number of potential users and the higher the perception of the application's quality. Social networks have clients for mobile devices and for desktops. The interface is not always straightforward, intuitive, clean or easy to use, but the very large number of users attracts more and more users every day.
The specifications definition stage of the development cycle is the one that gives the first insight into the real functionality of the software through the eyes of the user. Messing up this stage means developing for no one, as the product will not respond to any of the users' needs. The specifications must be exact, complete and correct. To ensure exactness one must verify that every measurable input, output or process has limits defined for its values. In the case of a variable, from the user's point of view, any wrong value must be highlighted in the GUI along with a clear and easy to read error message. For string variables the maximum and minimum length must be specified; for decimal numbers, the maximum number of decimals is important.
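A minimal sketch of such limit checks, assuming a hypothetical name field and price field with illustrative limits that stand in for a real specification:

import java.math.BigDecimal;

public class SpecificationLimits {
    // Hypothetical limits taken from an assumed specification document.
    static final int NAME_MIN = 2, NAME_MAX = 50;
    static final int PRICE_MAX_DECIMALS = 2;

    static String checkName(String name) {
        if (name.length() < NAME_MIN || name.length() > NAME_MAX) {
            return "Name must have between " + NAME_MIN + " and " + NAME_MAX + " characters.";
        }
        return null; // null means the value respects the specification
    }

    static String checkPrice(BigDecimal price) {
        if (price.stripTrailingZeros().scale() > PRICE_MAX_DECIMALS) {
            return "Price must have at most " + PRICE_MAX_DECIMALS + " decimals.";
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(checkName("A"));                      // too short
        System.out.println(checkPrice(new BigDecimal("9.999"))); // too many decimals
    }
}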

In the case of algorithms, the execution time and memory consumption are the concerns. In the case of online applications the memory consumption is never apparent to the user, as most of these run within a browser, but for


standalone applications memory is a real concern. More important than the memory consumption is the execution time. No user wants an algorithm that solves a relatively simple problem to run for a long period of time. Users don't actually know the complexity of the problem that the algorithm solves, but they have a slight perception of it. The higher the perceived complexity, the longer the user is willing to wait. If time consuming algorithms can't be avoided, one must verify that the GUI includes visual cues that give the user information about the progress of the task and, if possible, about the estimated remaining time. This makes the users stop the processing less often, as they see clearly that there is progress, the computations have not stopped and the remaining time decreases. In the case of problems for which there aren't algorithms with complete coverage, it is preferable to limit the input interval and use a simple and efficient algorithm rather than a time consuming one. Computing some values through an efficient algorithm and others through a time consuming one that can return a value, without providing visual cues in the GUI, is not a good solution either. Without visual indicators the users will start to ask why, for some values, the application's feedback is instant and for others it takes forever, or at least an observable time. Without understanding the process, they will soon doubt its correctness and start looking for alternatives. Providing some visual indicators, such as

messages stating that the input value requires some special processing that will take some time, informs the users of the special situation they find themselves in and makes them expect something to be different, in this case the processing time.
In the project building stage of the development cycle, the data structures, functions and procedures, modules and interdependencies are established. The definition of the modules and interdependencies is very important for the modularity of the application and for the order of development of the different modules and functionality. For this, one must verify that a module that depends on another is not scheduled for development before that one or at the same time.
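A minimal sketch of this scheduling check, assuming a hypothetical dependency map and a planned order of development:

import java.util.*;

public class ScheduleVerification {
    public static void main(String[] args) {
        // Hypothetical dependencies: a module maps to the modules it depends on.
        Map<String, List<String>> dependsOn = Map.of(
                "Reports", List.of("Database"),
                "GUI", List.of("Reports"),
                "Database", List.of());

        // Planned development order to verify.
        List<String> schedule = List.of("Reports", "Database", "GUI");

        Set<String> alreadyDeveloped = new HashSet<>();
        for (String module : schedule) {
            for (String dep : dependsOn.getOrDefault(module, List.of())) {
                if (!alreadyDeveloped.contains(dep)) {
                    System.out.println(module + " is scheduled before its dependency " + dep);
                }
            }
            alreadyDeveloped.add(module);
        }
    }
}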

Data structures are entities that are used by functions and procedures to perform tasks. Problems can be solved in many ways. The difference between a poor piece of code and a good, efficient one is made by data structures and algorithms. Using the right data structure and the right algorithm enables increasing the dimension of the input data. Given a simple

problem, such as determining all combinations of numbers from a set that summed up give zero, one can easily solve it by iterating through the set and creating all possible combinations. If the resulting sum of a combination is zero, the combination is added to the solution list. This approach is simple and easy to implement, but the time needed for large data sets is very large. For a set of 10 elements, there are about 10^3 combinations to evaluate, and that is not that much if it is not done frequently, but for a set only 10 times larger the needed time is 10^6, about one thousand times larger than for the previous data set. The increase of the input data set dimension by a factor of 10 leads to an increase of the needed time by a factor of 10^3. It is clear that this algorithm is not suitable for solving this problem for large input data sets. An easy improvement of the algorithm is to calculate sums of 2 elements and see if there is a third element that has the same value as the sum, but negated, which reduces the factor by which the execution time multiplies.
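The sketch below, under the assumption that the combinations in question are triples of distinct values, contrasts the brute-force approach with the improved one that looks up the negated sum of two elements:

import java.util.*;

public class ZeroSumCombinations {
    // Brute-force variant: examine every combination of three elements, about n^3 steps.
    static int bruteForceCount(int[] v) {
        int count = 0;
        for (int i = 0; i < v.length; i++)
            for (int j = i + 1; j < v.length; j++)
                for (int k = j + 1; k < v.length; k++)
                    if (v[i] + v[j] + v[k] == 0) count++;
        return count;
    }

    // Improved variant: sum two elements and look for the negated sum in a hash set,
    // roughly n^2 steps; a sorted canonical form removes duplicate triples.
    static int lookupCount(int[] v) {
        Set<Integer> values = new HashSet<>();
        for (int x : v) values.add(x);
        Set<List<Integer>> triples = new HashSet<>();
        for (int i = 0; i < v.length; i++)
            for (int j = i + 1; j < v.length; j++) {
                int third = -(v[i] + v[j]);
                // Assumes distinct element values, as in the sample vector below.
                if (third != v[i] && third != v[j] && values.contains(third)) {
                    List<Integer> t = new ArrayList<>(List.of(v[i], v[j], third));
                    Collections.sort(t);
                    triples.add(t);
                }
            }
        return triples.size();
    }

    public static void main(String[] args) {
        int[] v = {3, -1, -2, 5, -3, 0};
        System.out.println(bruteForceCount(v) + " zero-sum triples (brute force)");
        System.out.println(lookupCount(v) + " zero-sum triples (with lookup)");
    }
}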

At this point, one must verify that no data structures that were defined remain unused. If this is the case, those must be removed. The algorithms must be verified to see if all the data structures they need have been defined. If there are data structures that have not been defined, they must be defined. After these two verifications, only the required data structures will be defined.
For functions and procedures one must verify

the signature, meaning the return type and the parameters, whether all parameters are used within the computations and whether a result is returned.


Table 1. Input data \ Results table

Results: R1 R2 R3 R4 R5 R6
D1: X X
D2: X X
D3: X X
D4: X
D5: X X
D6:
D7: X

For functions and procedures it is very important that all data inputs contribute to the results. In Table 1 one can easily identify the useless data input (D6) and the result that is not computed (R5). If rows in which no X has been placed are identified, this means that the corresponding component of the data input set is not used for any result. When columns in which no X has been placed are identified, this means that the corresponding result uses no input data for its computation.
When useless data is identified, one must verify that it is not needed and then remove it from the function's or procedure's signature. When results that use no input data are identified, either they are calculated from constants, and in this case they should be cached, or there is a problem and input data has not been considered for them. In the latter case, one must identify the necessary input data and include it.
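A hedged sketch of how such a cross-reference check could be automated, assuming the relation is kept as a boolean matrix with one row per data input and one column per result (the marks below are illustrative, chosen only so that D6 and R5 are empty as in the discussion above):

public class CrossReferenceCheck {
    public static void main(String[] args) {
        // used[i][j] == true when data input D(i+1) participates in result R(j+1).
        boolean[][] used = {
                {true,  true,  false, false, false, false},  // D1
                {false, true,  true,  false, false, false},  // D2
                {true,  false, false, true,  false, false},  // D3
                {false, false, false, false, false, true },  // D4
                {false, false, true,  true,  false, false},  // D5
                {false, false, false, false, false, false},  // D6
                {false, false, false, false, false, true }   // D7
        };
        // Rows without any mark: data inputs that no result uses.
        for (int i = 0; i < used.length; i++) {
            boolean any = false;
            for (boolean b : used[i]) any = any || b;
            if (!any) System.out.println("D" + (i + 1) + " is not used by any result");
        }
        // Columns without any mark: results computed from no data input.
        for (int j = 0; j < used[0].length; j++) {
            boolean any = false;
            for (boolean[] row : used) any = any || row[j];
            if (!any) System.out.println("R" + (j + 1) + " uses no data input");
        }
    }
}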

Table 2. Input data \ Formulae table

Formulae: F1 F2 F3 F4 F5 F6
D1: X
D2:
D3: X
D4: X X
D5: X X
D6: X
D7: X

These considerations are also valid for the relation between input data and formulae. All input data must be used in at least one formula, and all complex formulae must use at least one component of the data input set.

Table 3. Results \ Formulae table

Formulae: F1 F2 F3 F4 F5 F6
R1: X X
R2:
R3: X
R4: X
R5: X
R6: X
R7: X

In Table 3 the correlation between the results and the formulae used is presented. As before, no X on a row means that the result uses no formula for its computation, and no X on


a column means that the corresponding formula is not used for the computation of any result.
For a given procedure:

Public Return_type Name(Type1 P1, Type2 P2, ..., Typek Pk)

one has to verify that:
- k is the necessary number of parameters; no unnecessary parameters have been inserted and no necessary parameters have been omitted;
- the parameters are in the correct order; as function overloading is based on the order and types of the parameters, the correct order is essential;
- all types (the return type and the parameter types) are the correct ones; if at least one type is not correct, the function is useless.
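A short illustration of the second point: the sketch below defines two hypothetical overloads that differ only in parameter order, so swapping the arguments silently selects a different computation instead of producing a compile error.

public class OverloadOrderDemo {
    // Hypothetical overloads: same name, parameters in a different order.
    static double price(int quantity, double unitPrice) {
        return quantity * unitPrice;
    }

    static double price(double discount, int quantity) {
        return quantity * (1.0 - discount);
    }

    public static void main(String[] args) {
        // The order and types of the arguments decide which overload runs,
        // so passing parameters in the wrong order silently changes the result.
        System.out.println(price(3, 2.5)); // first overload: 7.5
        System.out.println(price(0.5, 3)); // second overload: 1.5
    }
}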

For a for loop:

for (int i = 0; i < n; i++)


the expected ones, to see if the execution has been correct or not.
The sample testing stage of the development cycle assumes testing the application with real data samples. Regardless of the experience of the QA team, the number of tests they can run is limited and can't cover all the functionality of the application with all possible values. Sample testing, on the other hand, should use random samples of the data input that users actually utilize. These can be obtained by recording them from the users, but without saving sensitive data and only after asking their agreement. By testing with samples, issues can be found that were missed by the QA team. Verification at this stage means checking that the pool from which the samples are extracted is large enough and covers a very large part of the newly implemented functionality. One must also verify that the tested samples are correctly executed and that they cover the newly implemented code.
The documenting stage of the development cycle assumes the documentation of code through comments. At this point one must verify that there are no complex code segments that lack documentation and that the existing documentation is clear, specific and exact. If documentation is missing it must be added, and if it is not clear or precise enough, it must be reformulated.
The implementation stage assumes that the distributed application is installed and configured on the client's server. This is one of the final stages of the development cycle

and is of great importance, as the real users will start using the application after this step. Verification, at this point, means checking that all the application's components are installed in the right place, that the connection to the database is correct, that the server has all the components needed for the application to run, that the server has Internet connectivity, and other requirements specific to the application.
The maintenance stage of the development cycle lasts between the implementation and the removal from use. Its purpose is to correct any problems that were not identified and corrected during the development cycle and to implement new features as per users' requests. Verification at this stage has the following aspects:
- verification of the issues that are visible now but were not identified during the development cycle;
- verification of the new features implemented as per users' requests.

The verification of the issues discovered while the users use the application assumes checking the conditions under which the problem appears, checking whether the analysis of the root causes has been complete, and checking whether the solution covers all situations and provides the users with the desired results.
The verification of new features means, first of all, checking whether the feature is really required by enough users to be worth implementing. If this is the case, additional verifications must be done: whether the needed input is available or additional changes must be made, whether the extra feature can cause problems to the existing application, whether the extra feature's entry point is where the users requested it, and whether the planned functionality is the one the users demanded.
The software reengineering stage of the development cycle happens when the application in question is so hard to maintain as to justify a complete refactoring. Not all applications pass through the software reengineering stage, as not all of them last so long that maintenance over a few years costs as much as or more than

implementing the application again using newer technologies, but when it happens one must verify that:
- the newly chosen technology is fully compatible with the existing functionality of the application;
- the planned development process does not use more resources than currently allocated;
- the process is transparent to users;
- the maintenance costs after the reengineering process will be significantly lower than the current ones;


variable is assigned another value without the prior one being used in any way; this happens when the developer starts with one idea, drops it after half implementing it and starts developing based on another idea, but reuses the initial variables; the verification in this case assumes that the developer follows the entire code sequence once it is written and ensures that no variables are computed and then discarded before the computed value is used; if this happens, the computation must be removed from the sequence, as it is not only useless but also a burden for the efficiency of the code;
- invariance management assumes the discovery and elimination of operations that happen multiple times when they should only happen once; it is not uncommon to have repetitive structures in code when iteration through the elements of a collection is needed or a certain task must be executed for many input data; repetitive structures are places where disasters can occur if the developer's focus is not maintained during the whole process; let us assume we have a function to be computed for one billion elements; each line of code that forms that function will be executed one billion times for our data, and each line of code we remove will not be executed; even small improvements in such a function that is executed very often lead to spectacular increases in performance and system responsiveness;
- common code grouping assumes to

include as much code in a block as possible and avoid repetitive atomic operations; let us consider the following code sequence that computes the sum of a vector's elements, the sum of the positive elements and the sum of the negative ones:

s = 0; sn = 0; sp = 0;
for (int i = 0; i < n; i++) { s += v[i]; if (v[i] > 0) sp += v[i]; else sn += v[i]; }


be taken into consideration when many files must be read or written in a short amount of time; even if their size is not large, the whole operation will last some time because the HDD can't initiate read/write operations at a very high rate; to overcome this issue one must design the storage so that fewer files are accessed; with the information stored in fewer logical bags, the number of read/write operations is reduced and thus so is the total operating time; another issue here might be caused by the reading/writing of very small data segments; let us consider a file that contains one million integers, meaning around four million bytes, and a code sequence that has to compute different statistical operations on the data; as one million integers don't use a lot of memory, once these are read from the HDD they will be stored in RAM and used for computations; reading the values from the file is another story; if one reads the integers one by one, the HDD will make one million read operations; dividing by the number of operations per second, we can approximate the time needed for the operations to complete; in a simple test run with one hundred million chars, the data in Table 4 was obtained (a sketch after this list illustrates the bulk versus per-element difference):

Table 4. Times needed for different operations on the same dataset

Operation    Run 1   Run 2   Run 3
Write all     1123     873     996
Write each    2480    2277    2415
Read all       904     702     846
Read each     4914    5085    4989

one can easily see that the operations done in bulk need much less time to complete than the atomic operations;
- code duplication is something that all

developers avoid due to the issues that appear in maintenance and updating processes; sometimes, though, the duplication of code can save a lot of time; function and procedure calls can be time consuming and can even outlast the time needed for the procedure itself to execute; when this is the case and the procedure's code is short and simple, it is better to duplicate code than to have terrible performance; duplicating code should not become a habit of any developer and, when such extreme situations appear, the duplication must be

backed up by serious documentation;
- cost optimization assumes obtaining a set of predefined results with minimum costs; for this, one must verify that the chosen solution is the one that involves the smallest costs, taking into account the team's training, the known technologies, the available licenses, etc.; forgetting an important cost factor might lead to choosing a not so cheap solution, and once the project starts few have the guts to step back and restart the whole development machine.
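As a hedged illustration of the bulk versus per-element difference behind Table 4 (the temporary file, the one million byte volume and the measured times are assumptions, much smaller than the paper's one hundred million character run):

import java.io.*;

public class BulkVsSingleIO {
    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1_000_000];          // one million bytes of sample data
        File file = File.createTempFile("bulk", ".bin");

        // Bulk write: one call hands the whole buffer to the stream.
        long t0 = System.nanoTime();
        try (OutputStream out = new FileOutputStream(file)) {
            out.write(data);
        }
        long bulkWrite = System.nanoTime() - t0;

        // Per-element write: one call per byte, many more system-level operations,
        // deliberately unbuffered to expose the cost of atomic operations.
        t0 = System.nanoTime();
        try (OutputStream out = new FileOutputStream(file)) {
            for (byte b : data) out.write(b);
        }
        long eachWrite = System.nanoTime() - t0;

        System.out.println("Write all:  " + bulkWrite / 1_000_000 + " ms");
        System.out.println("Write each: " + eachWrite / 1_000_000 + " ms");
        file.delete();
    }
}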

Through the optimization of software, machines that seem obsolete are used again, resources are saved and efforts are directed towards continuous development and optimization.

5 Verification Processes of Distributed Informatics Applications and Influence Factors of Verification
A very clear relation must be established between testing, validation and verification. Consider a distributed informatics application currently in use, obtained by covering all the phases of the full development cycle. At some point a result R is desired. For this purpose the necessary options are selected. There is a defined procedure that is executed by an operator many times, and the success rate is very high. Verification in this case means processing the result R and seeing whether:
- its structure, at the qualitative but also at the quantitative level, is the expected one;


- the payment amount is inserted;
- the payment details are filled in.
Verification involves validating the accuracy of the IBAN and the correlation between the amount paid and the amount entered by the user. In the case of a bill payment of 30 RON, if the user enters the amount of 300 RON, verification involves comparing the amount on the invoice with the one submitted in the electronic payment application. For this, there must be a clear verification procedure.
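A minimal sketch of such a procedure, combining the standard IBAN mod-97 check with the comparison between the invoice amount and the amount entered by the user (the IBAN and the amounts below are illustrative, not data from the paper):

import java.math.BigDecimal;
import java.math.BigInteger;

public class PaymentVerification {
    // Standard IBAN check: move the first four characters to the end,
    // replace letters with numbers (A=10 ... Z=35) and verify value mod 97 == 1.
    static boolean ibanIsValid(String iban) {
        String s = iban.replace(" ", "").toUpperCase();
        if (s.length() < 15) return false;
        String rearranged = s.substring(4) + s.substring(0, 4);
        StringBuilder digits = new StringBuilder();
        for (char c : rearranged.toCharArray()) {
            if (Character.isLetter(c)) digits.append(c - 'A' + 10);
            else if (Character.isDigit(c)) digits.append(c);
            else return false;
        }
        return new BigInteger(digits.toString()).mod(BigInteger.valueOf(97)).intValue() == 1;
    }

    public static void main(String[] args) {
        String iban = "RO49AAAA1B31007593840000";         // sample IBAN with a valid checksum
        BigDecimal invoiceAmount = new BigDecimal("30");   // amount on the invoice
        BigDecimal enteredAmount = new BigDecimal("300");  // amount typed by the user

        System.out.println("IBAN valid: " + ibanIsValid(iban));
        if (invoiceAmount.compareTo(enteredAmount) != 0) {
            System.out.println("Entered amount " + enteredAmount
                    + " RON does not match the invoice amount " + invoiceAmount + " RON");
        }
    }
}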

In the case of the production process, verification is placed between the production operation and the product use operation.

Consider the procedure:

P = <I, A, E>

where:
I - the set of inputs;
A - the set of operations;
E - the set of results.
The procedure is repetitive and involves effective elements:

P_mine = <I_mine, A_mine, E_mine>

In this case, verification is a routine matter of seeing whether the theoretical I1, I2, ..., Ik are the same as I1_mine, I2_mine, ..., Ik_mine. The same thing must be done for the operations and the outputs. Verification means that P_theoretical and P_mine are identical.
Verification of the outputs actually means exploiting the results to see whether the user really uses them correctly, or whether the buttons work

properly, in the case of car mirrors.
A relationship must be established between the concepts of validation, control, testing and verification, in order to clearly distinguish verification from the others.
In the case of a production process, the testing operation appears at the end, to check that the product is made according to the established specifications. Testing also serves to quantify the percentage of the specifications that were met. If we set a threshold alpha for product acceptance, then all tests that have results above alpha are validated and all that fall below alpha are rejected.

Verification in audit processes [6] involves activities throughout the whole period in which the team works to obtain a better result. Upon receipt of the software product, the audit team checks for the following entries:
- specific documentation;
- test datasets;
- source texts;
- the executable to be used by customers.
The audit is based on these inputs, the quality of the final result being influenced by the outcome of their verification process [7].
During the audit process, verifications are made in order to give assurance that:
- the reports were built in compliance with all requirements;
- the indicators underpinning the decision of acceptance or rejection are calculated using all representative data, and the chosen indicators are appropriate to the specifics of the application that is subject to audit.

The final audit report should be verified to:
- contain all the standard structure elements;
- include all the arguments underlying the final decision;
- provide a clear conclusion, so that the developer knows what to do, based on solid arguments;
- eliminate redundant elements;
- manage the quality level;
- provide a logical, gradual and rigorous approach.
The verification of software applications at

the client level must show that:
- the application performs its basic functions (if the application runs on a mobile phone and requires GPS localization, one must check whether it performs the localization correctly);
- the options are working, and whether they produce directions according to the keywords, generating alternatives;
- the data entered from the keyboard are validated, and how the re-input process is done (re-entering only the inaccurate data or all of it);
- erroneous data are marked with correct messages;


- there is consistency between what the application requires and the data from invoices (the numbering of invoices differs from one utilities provider to another, each utility provider having its own encoding and a standard being missing, so that for the mobile phone, for energy and for gas the invoices look different, the field positions being given by a so-called absurd and unnecessary custom design);
- the position of the IBAN account on the invoice and the lack of contracts between utilities providers and banks make the data difficult to enter, and pre-filled payment orders do not exist;
- in addition, the lack of transparency and of database integration (banks do not read the databases of the utilities providers) also makes data entry on payment orders difficult.
We must verify the consistency between the data on the invoice and the data on the

payment order, or verify any other data from the documents, and only after this step can we accept and validate the payment. Even if the software product indicates errors and returns to the initial state with error messages or with incorrect fields colored in red, verification saves us from re-entering and re-validating the data.
It is worse when we select the wrong resource or when the payment amount introduced is higher (wrong), because the allocation of resources is already made, costs are incurred, and fixing the situation also consumes time or money (for a hotel reservation, if we realize the day before the paid accommodation that we want to cancel, then the accommodation must still be paid because the reservation cannot be canceled).
We can also say that the ease of identifying a product or a service is verified, knowing that the free search function, where the user enters a string, will never lead to finding the product, street or town if it is not accompanied by searches using flexible algorithms based on similarity.

6 Conclusions
Software testing has proved to be a vital stage of the development cycle since the first pieces of software ever realized. There is no such thing as software without bugs, and without extensive testing their number would be far larger in any software product. Even as the technologies evolve and numerous automated testing products appear, as the applications become more and more complex, the number of bugs in software decreases very slowly. Due to the increased complexity of the software, the number of test cases increases much faster than the capability of automated tools, and thus many cases remain uncovered.
In order to ease the strain on the testing process, software verification is done mainly by those designing and developing the software products. After verification there is little chance that major errors will appear further on, thus saving important resources and limiting the amount of strain in the bug-fixing period. Not only is the strain reduced during bug-fixing, but also during development, as correct specifications and code sequences lead to a lower rate of issues between developers. Even if the developers allocate around twenty percent of the development time to designing and executing dev-tests, verification is a vital process as it can eliminate most of the issues even before they cause any trouble.
The verification of software optimality assumes the consideration of criteria and areas to work on. Optimizations are done in order to decrease the time needed for execution, to decrease the memory footprint and to lower costs. In areas where there are plenty of technological and performance limitations, the optimization of software makes things possible.
Distributed informatics applications should be standardized. Verification involves seeing how easily the information can be accessed and how flexible the application is, in order to verify that the application is user friendly.


References
[1] I. Ivan, B. Vintilă, C. Ciurea, D. Palaghita, S. Pavel, "Autotesting of the Citizen Oriented Informatics Applications," Ekonomika, statistika i informatika. Vestnik UMO, MESI, Russia, No. 4, 2009, ISSN 1994-7844.
[2] P. Pocatilu, C. Ciurea, "Collaborative Systems Testing," Journal of Applied Quantitative Methods, Vol. 4, No. 3, 2009, pp. 394-405, ISSN 1842-4562.
[3] I. Ivan, C. Boja, A. Zamfiroiu, "Procese de Emulare pentru Testarea Aplicațiilor Mobile," Revista Română de Informatică și Automatică, Vol. 22, No. 1, 2012, pp. 5-16, ISSN 1220-1758.
[4] H. Eto, T. Dohi, "Optimality of Control-Limit Type of Software Rejuvenation Policy," Proceedings of the 11th International Conference on Parallel and Distributed Systems, Vol. 2, pp. 483-487, July 2005.
[5] C. Boja, M. Popa, I. Niescu, "Characteristics for Software Optimization Projects," Informatica Economică, Vol. 12, No. 1(45), 2008, pp. 46-51, ISSN 1453-1305.
[6] M. Popa, "Techniques and Methods to Improve the Audit Process of the Distributed Informatics Systems Based on Metric System," Informatica Economică, Vol. 15, No. 2, 2011, pp. 69-78, ISSN 1453-1305.
[7] M. Popa, C. Toma, C. Amancei, "Characteristics of the Audit Processes for Distributed Informatics Systems," Informatica Economică, Vol. 13, No. 3, 2009, pp. 165-178, ISSN 1453-1305.

Ion IVAN graduated from the Faculty of Economic Computation and Economic Cybernetics in 1970. He holds a PhD diploma in Economics from 1978 and has gone through all didactic positions since 1970, when he joined the staff of the Bucharest Academy of Economic Studies. He is the author of more than 25 books and over 75 journal articles in the fields of software quality management, software metrics and informatics audit. His work focuses on the analysis of the quality of software applications. He has participated in the scientific committees of more than 20 conferences on informatics and has coordinated the publication of 3 proceedings volumes for international conferences. Since 1994 he has been a PhD coordinator in the field of Economic Informatics. His main fields of interest are: software metrics, optimization of informatics applications, development and assessment of text entities, analysis of the efficiency of implementing ethical codes in the informatics field, software quality management and data quality management.

Cristian CIUREA has a background in computer science and is interested in collaborative systems related issues. He graduated from the Faculty of Economic Cybernetics, Statistics and Informatics of the Bucharest Academy of Economic Studies in 2007. He holds a master's degree in Informatics Project Management (2010) and a PhD in Economic Informatics (2011) from the Academy of Economic Studies. His other fields of interest include software metrics, data structures, object oriented programming in C++, Windows applications programming in C# and mobile devices programming in Java.


Bogdan VINTILĂ graduated from the Bucharest University of Economics, the Faculty of Cybernetics, Statistics and Economic Informatics. He finished his PhD in the field of Economic Informatics at the University of Economics in 2011. He is interested in citizen oriented informatics applications, the development of applications with large numbers of users and large data volumes, e-government, e-business, project management, applications' security and applications' quality characteristics.

Gheorghe NOȘCA graduated from the Mechanical Faculty of the Military Technical Academy in 1981 and from the Cybernetics, Statistics and Economic Informatics Faculty of the Academy of Economic Studies in 1992. He obtained his PhD degree in Economics, in the Cybernetics and Economic Statistics specialty, in 2003. He is currently a researcher at the Association for Development through Science and Education. He has published (in co-operation) 3 books and 16 articles in informatics journals. He has taken part in about 20 national and international conferences and symposiums. His research interests include data quality, data quality management, software quality cost, informatics audit and competitive intelligence.