Wright-Patterson AFB, OH 45433-7905
The US Air Force has invested heavily in a training and rehearsal concept called Distributed Mission Operations (DMO) and is working to augment and integrate DMO into a new construct called blended Live, Virtual, and Constructive (LVC) training and operations. Distributed training provides the virtual and constructive (synthetic) elements of the new construct, while the Air Force is also interested in integrating live, operational systems into a learning-managed, realistic, adaptive, and affordable enterprise environment for training, rehearsal, test, and evaluation. The enterprise allows local and wide-area connection and potential blending of virtual simulators, constructive models, gaming environments, and relevant live operational systems, such as actual aircraft or ground command and control systems.
The success of distributed training, to include the synthetic components, hinges on several critical research needs that must be addressed: (1) methods and tools to specify, manage, and track mission needs and critical knowledge, skills, and experiences at appropriate levels of analysis; (2) guided and instructionally valid training objective and scenario design that ties content to the specifications; and (3) construct-oriented, validated, systematic subjective and objective methods and measures to predict, diagnose, monitor, and assess the performance of trainees within environments. Defined and integrated methods and measures must assist in the prescription of content and remediation within DMO and LVC operations to address knowledge and skill deficits and to help develop a new class of models based on human performance and machine learning. Methods, models, and metrics that are linked to training objectives and that permit routine and longitudinal assessments of individual and team motivation to learn, engagement in the learning process, engagement in the mission and key tasks, performance in training, and performance and proficiency in synthetic environments and operational settings must be developed and validated. The nature of the performance space and the grain size of the actual data, coupled with the unstructured, complex big-data issue that underlies human performance assessment, make this a significant area of research. Appropriate identification, integration, and validation of ways to best blend complementary trainers and environments that promote, maintain, and/or accelerate learning and performance must be developed. There are also significant research opportunities to develop, implement, and evaluate innovative methods to link and represent core knowledge, skills, and experiences in a way that helps define the environments for learning and that facilitates training development, delivery, evaluation, and transfer.
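One way to make research need (1) and its link to need (2) concrete is a minimal data model that ties training objectives back to the knowledge/skill specifications they exercise. The following sketch is purely illustrative; the class names, the `level` vocabulary, and the gap-analysis helper are assumptions introduced here, not artifacts of any Air Force system.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class SkillSpec:
    """One knowledge/skill/experience element at a chosen level of analysis
    (levels shown here are hypothetical examples)."""
    name: str
    level: str  # e.g., "individual", "team", "mission"


@dataclass
class TrainingObjective:
    """Ties scenario content and measures back to the specifications it is
    meant to exercise, so content can be tracked against mission needs."""
    objective_id: str
    skills: list            # list[SkillSpec] the objective exercises
    measures: list = field(default_factory=list)  # names of linked measures


def coverage_gaps(objectives, required_skills):
    """Return the required skills not exercised by any training objective,
    i.e., a simple needs/gap analysis over the specification."""
    covered = {skill for obj in objectives for skill in obj.skills}
    return sorted(s.name for s in required_skills if s not in covered)
```

With such a linkage in place, scenario design tools could query which objectives exercise a given skill, and gap analyses reduce to set operations over the specification.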
We are also interested in developing and validating criterion measures related to the impact of DMO and LVC on learning itself, proficiency, transfer, and readiness, and that help quantify the intervals necessary for refresher training. There is considerable latitude for research that explores how best to manage some, most, or all of the learning enterprise. Research can include (1) understanding and quantifying the extent to which a learner or a team of learners is engaged in the learning process and what should be done to appropriately adapt the contexts for learning to promote engagement to learn; (2) improving the quality and precision of needs assessment, gap, and trade-space analyses; (3) training/scenario design, delivery, and management tools; (4) integrating diverse approaches to training such as game-based systems and environments, intelligent and adaptive training environments, and part-task trainers; (5) developing methods to improve the credibility and security of learning, data exchanges, and interoperability among the systems; (6) rapid prototyping of novel approaches to human performance monitoring, modeling, assessment, and feedback; (7) developing more precise and generalizable performance measurement and proficiency tracking data, to include the problem of integrating fine-grained, discrete pieces of data (both lab and real world) into meaningful composite measures; and (8) improving ways to represent, visualize, and package feedback for after-action reviews. It may also include application of different approaches/strategies to learning and assessment in a variety of integrated and adaptive environments and contexts, including fixed-location and mobile assessment and feedback venues.
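The integration problem named in item (7) can be sketched concretely: rolling fine-grained, discrete performance observations into a running per-objective estimate, then into a weighted composite measure. The sketch below is a minimal illustration under stated assumptions; the exponentially weighted update, the decay constant, and the objective names are all hypothetical choices, not a prescribed Air Force method.

```python
from dataclasses import dataclass, field


@dataclass
class ProficiencyTracker:
    """Illustrative tracker that blends discrete performance events into a
    composite proficiency score. Weights and decay are assumptions."""
    decay: float = 0.8                      # weight on the prior estimate
    scores: dict = field(default_factory=dict)

    def record(self, objective: str, raw_score: float) -> float:
        """Fold one observation (clamped to 0.0-1.0) into the running
        estimate via an exponentially weighted moving average."""
        raw_score = max(0.0, min(1.0, raw_score))
        prior = self.scores.get(objective)
        estimate = raw_score if prior is None else (
            self.decay * prior + (1.0 - self.decay) * raw_score
        )
        self.scores[objective] = estimate
        return estimate

    def composite(self, weights: dict) -> float:
        """Weighted composite over the objectives observed so far,
        normalized by the weights of those observed objectives."""
        observed = {k: w for k, w in weights.items() if k in self.scores}
        total = sum(observed.values())
        if total == 0:
            return 0.0
        return sum(self.scores[k] * w for k, w in observed.items()) / total
```

A design note: the per-objective estimates support diagnosis and remediation targeting, while the single composite supports longitudinal proficiency tracking and readiness reporting; a validated system would need construct-oriented justification for both the update rule and the weights.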
Adaptive learning environments; After action review; Blended learning environments; Criterion development; Computational modeling; Needs assessment; Competency-based training; Training utility analysis; Distributed mission training; Learner and student engagement methods; Live, virtual, and constructive blended training; Game-based training and learning environments; Training and learning; Knowledge assessment; Machine learning models; Modeling and simulation; Performance modeling; Skill development; Proficiency assessment, monitoring, diagnosis, and prediction; Standards for interoperability; Training effectiveness; Training transfer