Editing tool for creating virtual reality content in the web environment
Development and research of a VR content editing tool for the web, that is, browser-based, environment
Virtual reality content for legal education sometimes requires partial revision when the underlying laws are amended.
Because VR content that has already been produced is built with a development authoring tool and delivered as an executable file, even partial modifications must go through the original creator or require operating the development tool directly. Likewise, the order of the content sometimes needs to change to follow the educational curriculum, yet such changes are difficult for anyone but specialized personnel.
Therefore, VR content production for the web, that is, browser-based, environment should support a 3D mesh rendering environment, interaction placement, various forms of voice-based guidance, visualization of key information, and sound settings in order to express a realistic environment. We developed and studied an editing tool whose UI/UX is designed for user convenience, so that users can edit VR content directly.
● Because experiential virtual reality content is delivered as a finished product, it is difficult to respond actively to changes in laws and educational curricula.
● If VR content is produced and distributed with a program that users can edit directly, users can reflect legal revisions immediately and on their own.
- For editing and running VR content in the web environment, the first required technology is rendering 3D data in the browser.
- As the basic method for rendering 3D data in the web environment, we apply Three.js, a WebGL-based rendering library, together with A-Frame, which provides an extensible, HTML-based structure on top of Three.js.
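As a minimal sketch of the HTML-based structure that A-Frame provides on top of Three.js, an editor can emit declarative scene markup instead of imperative WebGL calls. The asset path and entity attributes below are illustrative, not taken from the actual tool:

```javascript
// Sketch: generating a minimal A-Frame scene as an HTML string.
// A-Frame renders this markup with Three.js/WebGL in the browser.
// The mesh URL, position, and sky color are illustrative values.
function buildScene(meshUrl) {
  return [
    '<a-scene>',
    '  <a-assets>',
    `    <a-asset-item id="mesh" src="${meshUrl}"></a-asset-item>`,
    '  </a-assets>',
    '  <a-entity gltf-model="#mesh" position="0 1.6 -3"></a-entity>',
    '  <a-sky color="#ECECEC"></a-sky>',
    '</a-scene>',
  ].join('\n');
}
```

Because the output is plain markup, edited scenes can be stored as data and re-rendered without recompiling an executable, which is the key difference from authoring-tool-built VR content.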
WebGL rendering pipeline
- To separate the execution and editing functions of VR content, a login function is applied to support permission settings; according to each user's access rights, VR content is divided into an experience function and an editing function.
- The VR content experience function supports cross-platform VR so that various VR devices can be used, and applies technology that enables the VR experience even in environments without a VR headset, that is, on mobile and desktop.
- For the VR content editing function, the main features of the editor are implemented with a React-based structure, and a NoSQL database environment is applied for concurrent processing of the large volumes of 3D data used in VR content.
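The split between the experience function and the editing function can be sketched as a simple role-to-feature mapping checked after login. The role and feature names here are assumptions for illustration; in practice they would come from the session created at login:

```javascript
// Sketch: mapping login roles to the two access modes described above.
// Role names ('viewer', 'editor') and feature names are illustrative.
const PERMISSIONS = {
  viewer: ['experience'],          // may only run VR content
  editor: ['experience', 'edit'],  // may also open the editing tools
};

// Returns true if the given role is allowed to use the given feature.
function canAccess(role, feature) {
  return (PERMISSIONS[role] || []).includes(feature);
}
```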
- Scenario editing, the core of the content editing function, must support operating VR content scenarios by managing, creating, loading, deleting, and executing them.
- A scene, in which the 3D space and geometry of the scenario are laid out, supports editing of the experiencer's position and information display, sound settings, and interaction-method settings.
- All functions covered by the research methods are configured so that concurrent access can be handled in the web environment, and each supporting function is defined so that its operational characteristics fit the web environment.
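The scenario lifecycle operations named above (create, load, delete, execute) can be sketched as follows, using an in-memory Map in place of the database; the class and field names are illustrative, not the tool's actual API:

```javascript
// Sketch of the scenario lifecycle: create, load, delete, run.
// An in-memory Map stands in for the scenario database.
class ScenarioStore {
  constructor() { this.scenarios = new Map(); }
  create(id, title) {
    this.scenarios.set(id, { id, title, scenes: [] });
    return this.scenarios.get(id);
  }
  load(id) { return this.scenarios.get(id) || null; }
  remove(id) { return this.scenarios.delete(id); }
  run(id) {
    const s = this.load(id);
    if (!s) throw new Error(`scenario ${id} not found`);
    return `running scenario "${s.title}" with ${s.scenes.length} scene(s)`;
  }
}
```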
Web-based VR content editing tool configuration
- The main functions required for VR content editing in the web environment are a member function for managing the rights of connected users, a function for managing created scenarios, and settings and scene composition for creating new scenarios.
- The member management function implements detailed features for member sign-up, access, and authorized member login/logout, and manages access rights through account and access-log management.
- For created scenarios, MongoDB was configured to process large volumes of 3D data concurrently, and a REST API was applied for inter-application operation. To manage existing scenario data, we developed and applied delete, load, and run functions.
- The scenario setting function consists of a UI/UX for editing each scene in a scenario, pan and zoom in/out functions for checking the flow of scene registration and edits, and a quick-save function, implemented and applied to prevent data loss during work.
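The REST-style operations on scenario data can be sketched with a small route table dispatching to handlers. The paths and handler bodies are illustrative; in the actual tool these would map onto MongoDB queries rather than a Map:

```javascript
// Sketch: dispatching the scenario delete/load/save operations
// through a REST-style route table. An in-memory Map stands in
// for MongoDB; routes and payload shapes are illustrative.
const store = new Map();

const routes = {
  'PUT /scenarios':    (id, body) => { store.set(id, body); return body; },
  'GET /scenarios':    (id) => store.get(id) || null,
  'DELETE /scenarios': (id) => store.delete(id),
};

function dispatch(method, path, id, body) {
  const handler = routes[`${method} ${path}`];
  if (!handler) throw new Error(`no route for ${method} ${path}`);
  return handler(id, body);
}
```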
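The quick-save behavior can be sketched as a dirty-flag mechanism: edits mark the working scenario as unsaved, and a flush persists only when something changed. The persist callback is an assumption; the actual tool would write to the database through its REST API:

```javascript
// Sketch of a quick-save mechanism to limit data loss during work.
// `persist` is an illustrative callback standing in for a save request.
function makeQuickSaver(persist) {
  let dirty = false;
  let latest = null;
  return {
    edit(data) { latest = data; dirty = true; },   // record unsaved change
    flush() {                                      // save only if needed
      if (!dirty) return false;
      persist(latest);
      dirty = false;
      return true;
    },
  };
}
```

Calling `flush()` on a timer or before navigation gives the "quick save" effect without issuing a save request for every keystroke.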
Scene editing function
- For a scene, the detailed unit of a scenario, we applied a preview of registered 3D data together with translation and axis-rotation functions to set the camera, that is, the user's point of view, and an image management function to attach information and UI to each object.
- In addition, a default arrow shape is applied to express each scene and the movement between scenes, and a function for connecting scenes to one another is implemented in the scenario editing screen.
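The arrow connections between scenes amount to directed links in a graph, which can be sketched as follows; the scene ids are illustrative:

```javascript
// Sketch: scene-to-scene connections as directed links, matching the
// arrow-based connection function on the scenario editing screen.
function connectScenes(links, from, to) {
  links.push({ from, to });
  return links;
}

// Lists the scenes reachable in one step from a given scene.
function nextScenes(links, from) {
  return links.filter((l) => l.from === from).map((l) => l.to);
}
```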
Because VR in the web environment must make use of limited resources, we defined and developed the main functions of an editing tool that supports creating and editing various kinds of VR content. For the actual 3D data, the 3D meshes produced with the pre-computation rendering technique from our previous study could be reused, and more realistic education and training content can be created and edited by optimizing the pre-computation rendering pipeline.
Demonstration of results