Evaluating the Educational Influence of an E-learning System

Ani Grubišić, Slavomir Stankov, Branko Žitko
Faculty of Natural Sciences, Mathematics and Education
Nikole Tesle 12, 21000 Split, Croatia
Phone: (385) 21-38 51 33-105, Fax: (385) 21-38 54 31
E-mail: {ani.grubisic, slavomir.stankov, branko.zitko}@pmfst.hr

Abstract: Nowadays educational systems present their users (teachers and students) with an intelligent environment in order to enhance the learning and teaching process. The goal of e-learning system developers is to build systems that create individualized instruction and get as close as possible to the 2-sigma boundary. Because the acquisition of knowledge is often an expensive and time-consuming process, it is important to know whether it actually improves student performance. In this paper we present our approach to evaluating the educational influence of an e-learning system, as well as some results on evaluating an e-learning system's educational effectiveness in augmenting students' accomplishments for a particular knowledge domain, using the effect size as the metric. By doing so, we determine whether, and to what degree, an e-learning system increases students' performance and can therefore be an adequate alternative to human tutors. Copyright 2002 IFAC

Keywords: e-learning systems, evaluation, educational influence, effect size

1. Introduction

Evaluation is useful for investigating and exploring the different and innovative ways in which technologies are being used to support learning and teaching. All instructional software should be evaluated before being used in the educational process.
Developers of e-learning systems have become so involved in making their systems work that they have forgotten their original goal: to build an e-learning system that is as good as, or even better than, highly successful human tutors. Moreover, they have paid little attention to the process of evaluation, even though they need it to be able to say something about the outcomes of an e-learning system. Since the major goal of an e-learning system is to teach, the main test of its evaluation is to determine whether students learn effectively from it (Mark and Greer, 1993).

A useful definition of evaluation is that evaluation provides information for making decisions about a product or process (Phillips and Gilding, 2003). A well-designed evaluation should provide evidence of whether a specific approach has been successful and of potential value to others (Dempster, 2004). It incorporates principles and methods used in other fields of educational or social science research. Each methodology represents a different approach to evaluation.

Table 1 A brief history of e-learning systems evaluations (modified according to (Harvey, 1998))

Decade / Evaluation
1960s: Controlled, experimental studies. Learning is still regarded as independent of subject or context.
1970s: Still predominantly experimental, process-oriented descriptions. Methods include interviews, questionnaires, profiles, think-aloud protocols, observations etc.
1980s: Experimental methods consistently fail to produce sufficient detail for designers' and evaluators' purposes in formative and summative studies. Usability studies take precedence over learning evaluation. Results of formative evaluation and various forms of user testing become important inputs to development, and the iterative design cycle is established.
1990s: Methods must accommodate situations where teachers and learners may never meet face to face. Evaluation is now accepted as an important and ongoing aspect of program and course improvement, and the importance of context is undisputed. Evaluation is part of an ongoing process which feeds back into a plan - implement - evaluate - improve loop. Studies involve qualitative and quantitative measures as appropriate.

The fact that there are so many approaches in common use simply reflects the fact that no single methodology is the best. Which one will be most appropriate for you depends on the type of questions you are asking. A unique model for evaluating e-learning systems is hard to define. An effective evaluation should include an examination of the pedagogical aspect and the results of the learning and teaching process supported by the evaluated e-learning system. It can help to ensure that learning technologies are developed and adopted in ways that support learners and teachers in realizing their goals.

In this paper, we present a proposition for an e-learning system evaluation methodology. We give an overview of existing evaluation methods, as well as a methodology that can be used for evaluating the e-learning process.

2. Evaluation methods and instruments

Given the variety of educational system evaluation methods, it is not easy to decide which one is appropriate in a particular context (Iqbal, et al., 1999). Basically, there are two main types of evaluation methods (Frye, et al., 1988): formative and summative. Formative evaluation focuses on improvements to products and processes which are being developed. It is often part of a software engineering methodology, where it is used to obtain the information needed for modifying and improving a system's functionality. The purpose of formative evaluation is to inform ongoing processes and practices. It is important therefore that the findings are ready in time to enable you to make appropriate changes to your approaches or recommendations.
Formative evaluation doesn't concern itself only with the e-learning system as a product, but also with the learning processes of students and our performance as teachers. Summative evaluation is concerned with the evaluation of completed systems and tends to resolve questions such as: "What is the educational influence of an e-learning system on students?", "What does a particular e-learning system do?", "Does an e-learning system fulfill the purpose for which it was designed?", "Does an e-learning system result in the predicted outcomes?"

To summatively evaluate the effectiveness of an e-learning system on student learning, we first need an e-learning system which works the way it should. We also need to be clear about the type of learning the e-learning system is designed to achieve. While planning an evaluation, some tasks may need to be undertaken before you start development or implementation, such as collecting baseline information (pre-test data) for later comparison with the conditions that exist after the intervention. Evaluation should be a planned, systematic but also open process; you should aim to incorporate opportunities for discovering the unexpected.

All evaluation methods, irrespective of their type, can be classified along two dimensions (Fig. 1.) (Iqbal, et al., 1999). The first dimension focuses on the degree of evaluation covered by the evaluating method. If the method concentrates only on testing a component of a system, it can be considered suitable for internal evaluation. If the method evaluates the whole system, it is suitable for external evaluation. The second dimension differentiates between experimental research and exploratory research. Experimental research requires experiments that change the independent variable(s) while measuring the dependent variable(s), and requires statistically significant groups.
Exploratory research includes in-depth study of the system in a natural context using multiple sources of data, usually where the sample size is small and the area is poorly understood. A well-designed evaluation incorporates a mix of techniques to build up a coherent picture.

An evaluation answers the questions for which it was designed; hence the first step in research design is the identification of a research question. Hypotheses can be formed after identifying a research question; they must be testable, concerned with specific conditions and results, and possible to confirm or deny on the basis of those conditions and results. An evaluation methodology is then defined to enable the researcher to examine the hypothesis. When a practical, suitable evaluation method has been found to answer the research question, the researcher can carry out the study and analyze the data gathered through it. Ideally, if the results do not confirm the research hypothesis, researchers should be able to suggest possible explanations for their results.

Table 2 Process of experimental research (modified according to (Harvey, 1998))

Phase / Description
Describe the intervention: Describe exactly what will be different in the students' experience after the change you propose, as compared to the current situation.
Define the parameters: Only part of the class will experience the new learning situation, and their performance will be compared with that of their colleagues who have not experienced the change. You plan to continue with your normal practice and compare the learning outcomes of your students with those who have experienced the new learning situation.
Define success: Decide what outcome would be needed for you to consider your experiment a success.
Decide how to measure success: Decide how the outcome can best be measured.
Analyze your data: Analysis of data gathered through an experimental approach will most likely focus on deciding whether your innovation has had the predicted effect. Is there a difference to be seen in the outcome measure(s) gathered between your control and experimental situation? Is the difference in the direction which was predicted? And is the difference statistically significant? If it appears that differences do exist, then proceed to some test of statistical significance.

The way in which you select your student sample will have an effect both on the information gathered and on the impact that your findings might have. If you pick your own sample of students, you have the opportunity to select the students who are likely to be most co-operative, or a group of students with the most appropriate skill levels. You can also select a random sample of students in order to try to get a more representative cross-section of the class. You should make sure that, by selecting one group from a class and involving them in the evaluation study, you are not perceived as giving one group of students better support or tutoring than the rest of the class. It can happen that students complain about being separated from their peer group in some way (Harvey, 1998).

3. Evaluating the educational influence of an e-learning system

Experimental techniques are often used for summative research, where formal power is desired and where overall conclusions are desired. As is common in psychology and education (Mark and Greer, 1993), experimental research is suited to e-learning systems because it enables researchers to examine relationships between teaching interventions and students' learning results, and to obtain quantitative measures of the significance of such relationships. Different evaluation methods are suitable for different purposes, and the development of an evaluation is a complex process.
From the variety of different experimental designs, we have decided to describe the use of the pre-and-post-test control group experimental design, which enables determining the effects of particular factors or aspects of the evaluated system. Every educational innovation is an experiment in some sense of the word: you change something about the students' experience, predicting that better learning will take place. A controlled experiment is a way of teasing out the details of just which aspects of your innovation are influencing the outcomes you are considering and bringing about the changes you observe. The experimental method is a way of thinking about the evaluation process such that all the possible sources of influence are kept in mind.

3.1 Pre and post testing

The idea of pre and post testing of students is often accepted as a viable instrument for assessing the extent to which an educational intervention has had an impact on student learning. Pre and post testing is used because we know that students come to study a particular subject with different skills and backgrounds. We also need to establish a base measure of their knowledge and understanding of a topic in order to be able to quantify the extent of any changes in this knowledge or understanding by the end of a particular period of learning. Ideally, we wish to know not only that the educational intervention has had an impact on the student, hopefully a positive one, but also to be able to quantify that impact. The process should require students to undertake a test to determine their individual starting level of knowledge or understanding of a topic. At a later point they should undertake an exactly comparable test to determine the extent to which knowledge and understanding has been improved by the educational intervention.

Table 3 Process of pre and post testing (modified according to (Harvey, 1998))

Phase / Description
Test group: A student test group of at least 30 students.
Familiarization with the e-learning system: Although an e-learning system might be simple to use, it is important to ensure that students are familiar with all aspects of how to use its various features. You could consider organizing a familiarization session prior to your evaluation.
Pre and post testing: 1. Work around the e-learning system. Think about how much of the subject content students need to know before a pre-test. Post-test immediately after they have completed their study of the material in the e-learning system. 2. Selection of groups for two alternative modes of learning. One group can use the e-learning system as a substitute for lectures (on at least 2 occasions). The second group can follow the standard lecture programme. Both groups should undertake pre and post tests. 3. Work around the lecture. At this stage all students take the e-learning system unit prior to the delivery of the lecture on the topic. The pre and post testing is delivered immediately prior to and immediately after the lecture. These tests could be online or paper-based.
Analysis of results: The various tests will provide a huge amount of data - some of it will be raw numeric data that can be analyzed using standard statistical tests.

The design of the pre and post questions is critical to success. Repeating the same test questions is obviously not a sound way to achieve comparability, but it is a good idea to retain a proportion of the original test materials and to blend this with new questions which examine the same expected learning outcomes. It is also important to consider the type of questions used. Certainly we should not rely purely on objective questions. However, extended questions which seek to test a whole range of issues are also inappropriate.
3.2 Process of evaluation

For the purposes of e-learning system evaluation, the students picked to take part in the experiment have to be randomly and equally divided into a Control group and an Experimental group. The Control group is involved in the traditional learning and teaching process, while the Experimental group uses the e-learning system. Both types of treatment should be scheduled for two hours weekly throughout one semester (2 hr/week x 15 weeks = 30 hours/semester). Both groups take a 45-minute paper-and-pen pre-test distributed at the very beginning of the course, and a 60-minute paper-and-pen post-test two weeks after the end of the course. Their results are scored on a 0-100 scale. The pre-test provides information on the existence of statistically significant differences between the groups concerning the students' foreknowledge, whereas the post-test provides information on the existence of a statistically significant difference between the groups concerning the educational influence of the e-learning system.

3.3 Analysis of results

Data analysis techniques are best chosen in relation to the types of data you have collected. Quantitative data will rely on correlation and regression methods, t-tests, analysis of variance, and chi-square as statistical outputs. Qualitative data may include transcripts from questionnaires, interviews or focus groups. Interpreting the results of an evaluation is difficult. In terms of the students' perception of the experience, for example, do students like it because it's new, or hate it because it's unfamiliar? You might ask whether the students would wish to use the e-learning system again and what improvements they would like to see. In terms of student performance, is it possible to isolate the effect of the new medium; is any change in scores the result of having a different group of students?
Students will not always express their feelings, preferences, goals, or any changes in their study behaviors using the same words. There may be cultural or gender issues that influence what and how students say something. All these factors may distort the evaluation (Dempster, 2004).

The t-test is the most commonly used method to evaluate the differences between two groups. Since the primary intention of evaluating an e-learning system's educational influence is to assess its overall effectiveness and effect size, the t-value of the means of the gains in test scores of the two groups has to be computed and compared (StatSoft, 2004). The p-value reported with a t-test represents the probability of error involved in accepting our research hypothesis about the existence of a difference. The critical region is the region of the probability distribution in which the null hypothesis is rejected. Its limit, called the critical value, is defined by the specified significance level. The most commonly used significance level is 0.05. The null hypothesis is rejected when either the t-value exceeds the critical value at the chosen significance level or the p-value is smaller than the chosen significance level. The null hypothesis is not rejected when either the t-value is less than the critical value at the chosen significance level or the p-value is greater than the chosen significance level.

In the t-test analysis, comparisons of means and measures of variation in the two groups can be visualized in box-and-whisker plots. These graphs help in quickly evaluating and "intuitively visualizing" the strength of the relation between the grouping and the dependent variable.

First, it has to be checked whether the groups' initial competencies were equivalent, before comparing the gains of the groups. That means calculating the mean pre-test score of both groups, together with its standard deviation.
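As an illustration only, the two-step comparison described here (checking initial equivalence on the pre-test, then comparing pre-to-post gains) can be sketched in Python. The group sizes, the scores, and the use of Welch's form of the t statistic are assumptions made for the sketch, not data or choices from the study described in this paper:

```python
from math import sqrt
from statistics import mean, stdev

def t_value(sample_a, sample_b):
    """Two-sample t statistic (Welch's form, which tolerates
    unequal group variances)."""
    na, nb = len(sample_a), len(sample_b)
    var_a, var_b = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    return (mean(sample_a) - mean(sample_b)) / sqrt(var_a / na + var_b / nb)

# Hypothetical scores on the 0-100 scale (NOT data from the study).
control_pre       = [55, 60, 52, 58, 61, 57, 54, 59, 56, 58]
experimental_pre  = [56, 59, 53, 57, 60, 58, 55, 60, 54, 58]
control_gain      = [12, 15, 9, 14, 10, 13, 11, 16, 12, 14]
experimental_gain = [18, 22, 17, 25, 19, 21, 24, 20, 23, 19]

# Step 1: check initial equivalence -- |t| should stay BELOW the
# critical value (about 2.10 at the 0.05 level for these group sizes).
t_pre = t_value(experimental_pre, control_pre)

# Step 2: compare the pre-to-post gains -- a |t| ABOVE the critical
# value indicates a statistically significant difference in gains.
t_gain = t_value(experimental_gain, control_gain)

print(round(t_pre, 2), round(t_gain, 2))
```

In this made-up data set the pre-test t-value is near zero (the groups start out equivalent) while the gain-score t-value clearly exceeds the 0.05 critical value, which is the pattern the methodology treats as evidence of a positive educational influence.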
Then the t-value of the pre-test means has to be computed to determine whether there is a reliable difference between the two groups. Now the hypothesis has to be stated, for example: "There is a significant difference between the Control and the Experimental group." Next, the gain scores from pre-test to post-test are to be compared. That means calculating the mean gain of both groups, together with its standard deviation. Then the t-value of the means of the gain scores has to be computed to determine whether there is a reliable difference between the Control and the Experimental group. If there is a statistically significant difference, it implies that the e-learning system had a positive effect on the students' understanding of the domain knowledge. In other words, our hypothesis is accepted.

The effect size is a standard way to compare the results of two pedagogical experiments. The effect size can be calculated using different formulas and approaches, and its values can diverge. In our approach to evaluating the educational influence of an e-learning system, the average effect size has to be computed in order to get a unique effect size that can be used in meta-analysis studies. According to (Mohammad, 1998), there are four types of effect size: standardized mean difference, correlation, explained variance, and interclass correlation coefficient. For determining group differences in experimental research, the use of the standardized mean difference is recommended (Mohammad, 1998). The standardized mean difference is calculated by dividing the difference between the experimental and control group means by the standard deviation of the control group. The following formula is used for the calculation of this standardized score:

Δ = (Xe - Xc) / sc, (1)

where Xe = mean of the experimental group; Xc = mean of the control group; sc = standard deviation of the control group.
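A minimal sketch of formula (1) in Python; the scores below are made up for illustration and do not come from any actual experiment:

```python
from statistics import mean, stdev

def effect_size(experimental, control):
    """Standardized mean difference per formula (1):
    (mean of experimental - mean of control) / stdev of control."""
    return (mean(experimental) - mean(control)) / stdev(control)

# Hypothetical post-test scores on the 0-100 scale.
control      = [61, 58, 65, 70, 62, 59, 66, 64, 60, 63]
experimental = [66, 64, 68, 63, 67, 65, 69, 64, 66, 65]

delta = effect_size(experimental, control)
print(round(delta, 2))
```

For this made-up data the standardized mean difference comes out around 0.8, the order of magnitude the paper discusses for intelligent tutoring systems; because different formulas can yield diverging values, averaging the results of the applicable formulas, as proposed above, gives a single figure suitable for meta-analysis.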
The mean, or arithmetic average, is the most widely used measure of central tendency, and the standard deviation is the most useful measure of variability, or spread of scores. Effect sizes can also be computed as the difference between the control and experimental post-test mean scores divided by the average standard deviation. According to (Frye, et al., 1988), the effect size can then be calculated using this formula:

Δ = Δ(post-test) - Δ(pre-test). (2)

The effect size can be calculated using different formulas and approaches, and its values can diverge. In our approach to evaluating the educational influence of an e-learning system, we propose computing the average effect size in order to get a unique effect size that can be used in meta-analysis studies.

4. Conclusion

As we have stated, all instructional software should be evaluated before being used in the educational process. A unique model for the evaluation of e-learning systems is hard to define, and the methodology we have presented in this paper can ease the search. The presented evaluation methodology for e-learning systems is based on experimental research using the pre-and-post-test control group experimental design. Pre and post testing is a practical instrument for appraising the amount of educational influence of a certain educational intervention. When it comes to interpreting the results of an evaluation, the t-test is the most commonly used method to evaluate the differences between two groups. First, it has to be checked whether the groups' initial competencies were equivalent, before comparing the gains of the groups. Next, the gain scores from pre-test to post-test are to be compared.

This evaluation methodology has been used to evaluate the educational influence of the Web-based intelligent authoring shell Distributed Tutor Expert System (DTEx-Sys) (Stankov, 2004). The DTEx-Sys effect size of 0.82 is slightly less than 0.84, a standard value for intelligent tutoring systems (according to (Fletcher, 2003)).
Acknowledgements

This work has been carried out within projects 0177110 "Computational and didactical aspects of intelligent authoring tools in education" and TP-02/0177-01 "Web oriented intelligent hypermedial authoring shell", both funded by the Ministry of Science and Technology of the Republic of Croatia.

REFERENCES

Cook, J. (2002). Evaluating Learning Technology Resources, LTSN Generic Centre, University of Bristol.

Harvey, J. (ed.) (1998). Evaluation Cookbook. Learning Technology Dissemination Initiative, Institute for Computer Based Learning, Edinburgh: Heriot-Watt University.

Dempster, J. (2004). Evaluating e-learning developments: An overview, available at: www.warwick.ac.uk/go/cap/resources/eguides

Fletcher, J.D. (2003). Evidence for Learning From Technology-Assisted Instruction. In: Technology applications in education: a learning view, (H.F. O'Neal, R.S. Perez (Ed.)), Mahwah, NJ: Lawrence Erlbaum Associates, pp. 79-99.

Frye, D., D.C. Littman and E. Soloway (1988). The next wave of problems in ITS: Confronting the "user issues" of interface design and system evaluation. In: Intelligent tutoring systems: Lessons learned, (J. Psotka, L.D. Massey, S.A. Mutter and J.S. Brown (Ed.)), Hillsdale, NJ: Lawrence Erlbaum Associates.

Heffernan, N.T. (2001). Intelligent Tutoring Systems have Forgotten the Tutor: Adding a Cognitive Model of Human Tutors, dissertation, Computer Science Department, School of Computer Science, Carnegie Mellon University.

Iqbal, A., R. Oppermann, A. Patel and Kinshuk (1999). A Classification of Evaluation Methods for Intelligent Tutoring Systems. In: Software Ergonomie '99 - Design von Informationswelten (U. Arend, E. Eberleh and K. Pitschke (Ed.)), B. G. Teubner, Stuttgart, Leipzig, pp. 169-181.

Patel, A. and Kinshuk (1996). Applied Artificial Intelligence for Teaching Numeric Topics in Engineering Disciplines, Lecture Notes in Computer Science, 1108, pp. 132-140.

Mark, M.A. and J.E. Greer (1993).
Evaluation methodologies for intelligent tutoring systems. Journal of Artificial Intelligence and Education, 4 (2/3), pp. 129-153.

Mohammad, N.Y. (1998). Meta-analysis of the effectiveness of computer-assisted instruction in technical education and training, doctoral dissertation, Virginia Polytechnic Institute and State University, Blacksburg, Virginia.

Phillips, R. and T. Gilding (2003). Approaches to evaluating the effect of ICT on student learning. ALT Starter Guide 8, available at: http://www.warwick.ac.uk/ETS/Resources/evaluation.htm

Stankov, S., V. Glavinić, A. Granić and M. Rosić. Intelligent tutoring systems: research, development and usage. Edupoint - informacijske tehnologije u edukaciji, 1/I.

Stankov, S., V. Glavinić, A. Grubišić (2004). What is our effect size: Evaluating the Educational Influence of a Web-Based Intelligent Authoring Shell? In: Proceedings INES 2004 / 8th International Conference on Intelligent Engineering Systems, (S. Nedevschi, I.J. Rudas (Ed.)). Cluj-Napoca: Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 2004, pp. 545-550.

StatSoft, Inc. (2004). Electronic Statistics Textbook, available at: http://www.statsoft.com/textbook/stathome.html
Page Number4@B4 Header  9r DORD _r1$^r`hmH sH tH utC@bt Body Text Indent2  w ww w%r^`r6CJpR@rp Body Text Indent 2*  w ww w%^6CJtS@t Body Text Indent 32  w ww w%r^`rCJ0U@0 Hyperlink>*B*DT@D Block Textx]^dM@d Body Text First Indent$ x`a$N@a Body Text First Indent 2-  w ww w%x^`6CJ6"6 Caption xx52?@2 Closing ^88  Comment TextCJ$L@$ Date @Y@  Document Map!-D OJQJ8+"8  Endnote Text"CJd$@2d Envelope Address!#@ &+D/^@ OJQJF%@BF Envelope Return$ CJOJQJ:R:  Footnote Text%CJ: @: Index 1&^`: : Index 2'^`: : Index 3(^`: : Index 4)^`:: Index 5*^`:: Index 6+^`:: Index 7,^`:: Index 8-^`:: Index 9.p^p`B!bB  Index Heading/ 5OJQJ4/@4 List0^`82@8 List 216^6`83@"8 List 32Q^Q`84@28 List 43l^l`85@B8 List 54^`:0@R: List Bullet 5 & F>6@b> List Bullet 2 6 & F>7@r> List Bullet 3 7 & F>8@> List Bullet 4 8 & F>9@> List Bullet 5 9 & FBD@B List Continue:x^FE@F List Continue 2;6x^6FF@F List Continue 3<Qx^QFG@F List Continue 4=lx^lFH@F List Continue 5>x^:1@: List Number ? & F>:@> List Number 2 @ & F>;@> List Number 3 A & F><@"> List Number 4 B & F >=@2> List Number 5 C & F h-Bh  Macro Text"D  ` @ OJQJ_HmH sH tH lI@Rl Message Header.En$d%d&d'd-D^n`OJQJ>@b> Normal Indent F^4O@4 Note HeadingG<Z@< Plain TextH CJOJQJ0K@0 SalutationI6@@6 Signature J^BJ@B SubtitleK$<@&a$OJQJT,T Table of AuthoritiesL^`L#L Table of FiguresM ^` L>@L TitleN$<@&a$5CJ KHOJQJB.B  TOA HeadingOx 5OJQJ&@& TOC 1P.. TOC 2 Q^.. TOC 3 R^.. TOC 4 S^.. TOC 5 T^.. TOC 6 U^.. TOC 7 V^.. TOC 8 W^.. 
TOC 9 X^fOf 6,Bullet.Y d<7$8$#6B*CJOJQJ]^JaJphj@j 6, Table Grid7:VZ0ZLOL |+Figure[$1$7$8$a$CJOJQJ^JaJ\O\ |+Body Text 2 Char#5>*B*CJ_HmH phsH tH e j % %277eee8P*x>?mnoBCDEDE ~  C D K V W ] f g o hijkCD9:K L R ^ _ y !!!!d"e"v"""" #!#4#%% % % % % % &&)) )])^)-+.+,,.....l/m/1122222222223 3G394:4O4t4+5h556P6H7I7]77777777K8L8y:z:{:::U<V<>>>@@CCDDHHxJyJwLxLNN.P/P0P>P?PiQjQ\R]R^RRRRSSSSSTTUUDWEWXXXXXYYY6ZZ[[=\p]N^f_``aXbcdeeeeee e e eeeeeeeeee0000000000000000000000000000000000Y0 Y0 0 Y0 Y0 0 Y0 Y0 0 Y0 Y0 0 Y0 Y0 0 0000000000000000000000000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 00000000000000000000000000000000 0 0 0 0 0 0 0 0 0 000000 0 0 0 0 000000000000000000000000000000000000000000000000000000000000000000000000000000000@0I00@0I00@0I00@0I00@0@0I00\ 00|X >?mnoBCDEDE ~  C D K V W ] f g o hijCD9K L R ^ _ y !!!!d"e"v"""" #!#4#%% % % % % % &&) )])^)-+.+,,....l/m/1122222222223 3G394:4O4t4+5h556P6H7I7]77777777K8L8y:z:{:::U<V<>>>@@CCDDHHxJyJwLxLNN.P/P0P>P?PiQjQ\R]R^RRRRSSSSSTTUUDWEWXXXXXYYY6ZZ[[=\p]N^f_``aXbcdeeeeeee0000000000000000000000000000000000Y0 Y0 0 Y0 Y0 0 Y0 Y0 0 Y0 Y0 0 Y0 Y0 0 00000000000000000000000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 @00000000000000000000000000@0000 0 0 0 0 0 0 0 0 0 000000 0 0 0 0 I00900000000000000000000000000000000000000000000000000000000000000000000000000000@00 0@0 0  W(<+#2;+>C]xdg_l*j7:=?EGJPSW[]`acgEDfh!^)*d++ ,. /5;;<9=H@@yCwUZ^2cjDj8;<>@ABCDFHIKLMNOQRTUVXYZ\^_bhh9.!_R$Rq@Y/,?|R$UsefphL\NApb$.8VS!O2b$]%PiE)s/b$`j!''j\ 63K 0e0e     A@  A5% 8c8c     ?A)BCD|E|| "0e@       @ABC DEEFGHIJK5%LMNOPQRSTUWYZ[ \]^_ `abN E5%  N E5%  N F   5%    !"?N@ABC DEFFGHIJK5%LMNOPQRSTUWYZ[ \]^_ `ab@ (  P   # At#" `B S  ?e }#;$3D#" >0PjQR\R^RRSSXYY[[r[=\p]N^f_4````aXbcddeeeee e e e eeeeeeeeeeeeee e e e eeeeeeee |`p,C}HfB~}Ajp<@>9h~8R>"7n6`p:?n>5t oYgxT^`.^`.^`.^`. ^`OJQJo( ^`OJQJo( ^`OJQJo( ^`OJQJo(hh^h`. hh^h`OJQJo(hhh^h`OJQJo(hHh88^8`OJQJ^Jo(hHoh^`OJQJo(hHh  ^ `OJQJo(hHh  ^ `OJQJ^Jo(hHohxx^x`OJQJo(hHhHH^H`OJQJo(hHh^`OJQJ^Jo(hHoh^`OJQJo(hH^`o(. ^`hH. pLp^p`LhH. 
@ @ ^@ `hH. ^`hH. L^`LhH. ^`hH. ^`hH. PLP^P`LhH. ~}|t Yg                   rCg#|# " 0 ] ] z +p7b;C6,0^ U![!t3"'%-(q*|+^.00K1l3(7q;,=?U=-.@u|@/~@KAVXB kB"C 8CbCEvG%cHfIJhOPwHPQ-Q7QL5SNT"U+X,]^[_`_/K`- bnOb+chLeMeagygohDjQj'l m7moRWpIrOr{8vtw*xYtx3J}6#Y$hK6XmgR3N(&a~o$i79gH'mM. C D K V W ] f g o hijCK L R ^ _ y !!!d"e"v"""" #!#4#%% % %& )])^).2222223 3G394:4O4H7I7]777CHKLee eeeee33333333@..:8..@^R^Sgbe@@@@\@@@Unknown Gz Times New Roman5Symbol3& z Arial]WP IconicSymbolsAMT ExtraA& Arial Narrow5& zaTahoma?5 z Courier New3Times;Wingdings"1hFFoTF U3U3!4ddd2QHX? kB2 :INSTRUCTIONS TO AUTHORS FOR THE PREPARATION OF MANUSCRIPTS Martin Ruck Ani Grubisic<         Oh+'0, DP p |  <INSTRUCTIONS TO AUTHORS FOR THE PREPARATION OF MANUSCRIPTS Martin RuckNormalAni Grubisic13Microsoft Office Word@e@4@]{@ث{U՜.+,0D px  #Elsevier Science3d ;INSTRUCTIONS TO AUTHORS FOR THE PREPARATION OF MANUSCRIPTS Title  !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~Root Entry FŔ{Data 1TableWordDocumentZSummaryInformation(DocumentSummaryInformation8CompObjq  FMicrosoft Office Word Document MSWordDocWord.Document.89q