
Use the HTTP API


Using the API is straightforward if you have a model that learns and calculates x, y, z positions that you can send via HTTP request (you can learn how to make a request here). Run the sample in either debug or release mode. Two windows should pop up: one is a command line in the back, the other is where everything will be displayed.

After that, you need to POST a request with the number of classes and the image file paths. The request needs to be in JSON format (you can see a Python example here). If your data went through, you should see something similar to this.
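
As a rough sketch of what that first request might look like from Python (the endpoint URL and JSON field names below are assumptions for illustration, not taken from the sample):

```python
import requests

# Hypothetical endpoint -- replace with the address the sample actually listens on.
URL = "http://localhost:8080/api"

# Hypothetical field names: the number of classes and the image file paths.
payload = {
    "classes": 2,
    "files": ["images/cat_01.jpg", "images/dog_01.jpg"],
}

# POST the metadata as JSON before sending any position data.
response = requests.post(URL, json=payload)
print(response.status_code, response.text)
```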

In this example we trained a model on cat and dog images, so the x and y axes are named accordingly. Right now all the images are in the same place because we haven't sent any data about their positions yet. The position data must contain the x, y, z values in the same order as the file names, as in the sketch below.
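
Continuing the sketch above (the endpoint and the "positions" field name are likewise assumptions), the position update could be sent like this, with one x, y, z triple per image, in the same order as the file paths from the first request:

```python
import requests

URL = "http://localhost:8080/api"  # same hypothetical endpoint as above

# One [x, y, z] triple per image, in the same order as the file paths
# sent earlier. Here x/y could come from the model's predictions and
# the height axis from the loss.
payload = {
    "positions": [
        [-0.8, 0.1, 0.0],  # images/cat_01.jpg
        [0.7, 0.2, 0.0],   # images/dog_01.jpg
    ],
}

response = requests.post(URL, json=payload)
print(response.status_code, response.text)
```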

If everything went alright, you should see something similar happening on your screen. The model in this example is 85% accurate, so you can clearly see that most of the images on the left are dogs and most on the right are cats. The further an image sits to the left or to the right, the higher the confidence of the prediction. In this example, height corresponds to loss (a measure of how bad the prediction is), which is why the cat images that ended up on the left, i.e. the misclassified ones, are higher up.
