robert.malzan
Forum Replies Created
May 15, 2025 at 4:32 pm #1551
Currently we only support WebRequests (REST API, based on TCP/IP), but you are free to add your own code to the World Builder. Are you sure this is what you want to do? May I know what you are planning to do? Because my guess is that you want to tell the Hopper to show a hand and grab things via exoskeleton control… am I right? This would have been a nice topic to discuss in our bi-weekly meetings.
May 15, 2025 at 4:23 pm #1550
You can only change location once you have actually published your work to a server somewhere. What I do to test this feature is publish to http://localhost and copy the files to my XAMPP HTTP server under xampp/htdocs/. Remember to actually start your XAMPP or MAMP server for this to work!
In your change location node, use “external reference” and set the change location call to something like
http://localhost/my_next_location.vrml
(use the name of your next location).
This should do the trick.
May 14, 2025 at 10:49 am #1545
Internally all non-glb models are converted into glb models. If you replace a glb with another glb, the functionality should be the same.
I’m not sure I understood you correctly. Maybe you could describe your workflow and what you’d like to happen.
May 14, 2025 at 10:46 am #1544
The standard button size is maybe a bit too large. So it looks huge when you come closer. You could rescale it or use a button of your own. And, like I said, you could use any object and trigger it when you just touch it. As a reminder: a trigger zone can automatically take on the shape of the object.
Of course, the gaze node is another option. Store the gazed-upon value into a trigger variable. Combine it with a delay node. Check again after the delay if you are still looking at the same object as stored in the trigger variable, and if yes, trigger it.
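As a rough illustration of that gaze-plus-delay idea in plain Unity C# (not the actual node implementation — the GazeDwellSelector class, its fields and the OnGazeTriggered message are made up for this sketch):

using UnityEngine;

// Illustrative sketch only: store the gazed-upon object, wait for a delay,
// and trigger it if the gaze is still on the same object afterwards.
// Assumes this component sits on the camera, so transform is the camera transform.
public class GazeDwellSelector : MonoBehaviour
{
    public float dwellSeconds = 1.0f;   // plays the role of the delay node
    GameObject gazedObject;             // plays the role of the trigger variable
    float gazeStartTime;

    void Update()
    {
        // Raycast straight out of the camera to find what is currently looked at.
        GameObject hitObject = null;
        if (Physics.Raycast(transform.position, transform.forward, out RaycastHit hit))
            hitObject = hit.collider.gameObject;

        if (hitObject != gazedObject)
        {
            // Gaze moved to a different object: store it and restart the timer.
            gazedObject = hitObject;
            gazeStartTime = Time.time;
        }
        else if (gazedObject != null && Time.time - gazeStartTime >= dwellSeconds)
        {
            // Still looking at the same stored object after the delay: trigger it.
            gazedObject.SendMessage("OnGazeTriggered", SendMessageOptions.DontRequireReceiver);
            gazeStartTime = Time.time;  // restart so it does not re-trigger every frame
        }
    }
}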
May 13, 2025 at 7:32 am #1541
Thank you Boris! Your input is greatly appreciated. Currently, we’re thinking of having a ‘map’ overview where we only display nodes as square boxes (no rounded corners) and without the connecting splines so you can navigate more easily to the set of instructions you are looking for. Then, we could actually suppress drawing splines for connections which are completely above, below, to the left or to the right of the view window. We’re slowly getting there…
May 13, 2025 at 7:17 am #1540
The ‘heavy’ feeling should go away after a short while because the system still does some stuff in the background in the first minute. Anyway, that’s been my experience. If you want to touch objects, you can set up a trigger zone around the object which you can then respond to in the Logic Editor (node editor). You could also make a button from your object since buttons can have any shape.
There is a raycast node in your repertoire which may be used to do raycasts to, for instance, select an object by looking at it. In our next release, the raycast node will also get a position/direction input, so the raycast can originate from any arbitrary point instead of only the camera.
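For comparison, in plain Unity terms the upcoming position/direction input would let the node do something like the following, where the ray starts at a hand or controller transform instead of the camera (illustrative only; handTransform and maxDistance are made-up names):

using UnityEngine;

// Illustrative only: a raycast whose origin/direction come from an arbitrary
// transform (e.g. a hand) rather than the camera.
public class HandRaycastExample : MonoBehaviour
{
    public Transform handTransform;
    public float maxDistance = 10f;

    public GameObject FindPointedObject()
    {
        if (Physics.Raycast(handTransform.position, handTransform.forward,
                            out RaycastHit hit, maxDistance))
            return hit.collider.gameObject;
        return null;
    }
}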
May 13, 2025 at 6:25 am #1539
Hello Sametk13,
could you be a bit more specific about how you would implement this algorithm in a Unity environment? I already pointed you to the hand hierarchy and the XR Hand Skeleton driver. What I need to know is how exactly you would recognize the gesture if you had all the position and orientation (rotation) information of both hand skeletons.
The XRHandSkeletonDriver actually has a public
List<JointToTransformReference> jointTransformReferences;
which you can use to detect all the information you may need. In your code you need to be
using UnityEngine.XR.Hands
to have access to the XRHandSkeletonDriver.
Currently, I am assuming you’ll create a specific Node which reads both hands and resolves the changes to the hands/fingers as a gesture, which you can then output as text and display on a screen. The Node script is added to the World Builder, which will compile it at runtime. The same (Node) script will also be running on the Portal Hopper’s side to actively interpret and display the detected hand signals. Correct?
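To make that concrete, here is a minimal sketch of what such a gesture-reading script could look like. Only XRHandSkeletonDriver, jointTransformReferences, JointToTransformReference and XRHandJointID come from the XR Hands package; the PinchGestureNode class, its fields, the pinch threshold and the exact JointToTransformReference field names are assumptions for this example and may need adjusting to your package version:

using UnityEngine;
using UnityEngine.XR.Hands;

// Sketch: read the driven joint transforms of both hands and resolve a very
// simple gesture (thumb tip close to index tip = pinch) into a text label.
public class PinchGestureNode : MonoBehaviour
{
    public XRHandSkeletonDriver leftHand;   // assign the two skeleton drivers
    public XRHandSkeletonDriver rightHand;
    public float pinchThreshold = 0.02f;    // metres between thumb tip and index tip

    // Text that could be routed to a screen for display.
    public string DetectGesture()
    {
        if (IsPinching(rightHand)) return "Right pinch";
        if (IsPinching(leftHand))  return "Left pinch";
        return "No gesture";
    }

    bool IsPinching(XRHandSkeletonDriver hand)
    {
        Transform thumbTip = FindJoint(hand, XRHandJointID.ThumbTip);
        Transform indexTip = FindJoint(hand, XRHandJointID.IndexTip);
        if (thumbTip == null || indexTip == null) return false;
        return Vector3.Distance(thumbTip.position, indexTip.position) < pinchThreshold;
    }

    static Transform FindJoint(XRHandSkeletonDriver hand, XRHandJointID id)
    {
        // jointTransformReferences maps each joint ID to the Transform the
        // driver updates every frame (field names as in recent XR Hands versions).
        foreach (JointToTransformReference reference in hand.jointTransformReferences)
            if (reference.xrHandJointID == id)
                return reference.jointTransform;
        return null;
    }
}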
May 12, 2025 at 1:47 pm #1534
I found out what’s going wrong. Look at the graph in the area commented as “Room 2”. There are several Update nodes which go directly into the rig. But routing flow into the rig sets the rig position (at least 60 times per second for each Update node), which drives the rig / Avatar position crazy. So, what you saw was the Avatar shooting up into the sky… To read the rig position/direction, you can just read out the values without going into flow.
I would send you an image, but we still have the image posting problem…
If I was unclear or you have any other questions, please let us know.
May 7, 2025 at 3:56 pm #1523
Yes, the “Hand” node gives you the hand position and direction. What physics are you missing? You can assign gravity to an object if that’s what you are asking. To assign gravity, select the object in the Location Manager and select an object type of “grabbable object”. Then you get more options so you can select physics and gravity for the object. Physics means it can collide with other objects, and gravity will let the object drop when it is not resting on a surface and not in your hand (grabbed).
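In plain Unity terms (only an analogy, not how the Location Manager actually implements it), those two options roughly correspond to giving the object a Collider for collisions and a Rigidbody with useGravity enabled so it drops when released:

using UnityEngine;

// Analogy only: what “physics” and “gravity” on a grabbable object boil down to
// in stock Unity. The components are standard Unity; the class name is made up.
public class GrabbablePhysicsExample : MonoBehaviour
{
    void Awake()
    {
        // “Physics”: the object needs a collider so it can collide with other objects.
        if (GetComponent<Collider>() == null)
            gameObject.AddComponent<BoxCollider>();

        // “Gravity”: a rigidbody with gravity makes it drop when it is neither
        // resting on a surface nor held in the hand.
        var body = GetComponent<Rigidbody>();
        if (body == null)
            body = gameObject.AddComponent<Rigidbody>();
        body.useGravity = true;
    }
}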
May 6, 2025 at 9:55 am #1521
Yes, but you can prevent the lagging if you divide your project up into groups (select a number of nodes and create a node called “Group” which will hide the nodes inside and automatically create connectors to the logic tree outside of the group).
We already optimized as much as we can, except for a LOD logic which we plan to integrate eventually. The LOD logic will reduce the drawing load in the way you described.
You wrote: “If they (connection lines) are, please consider turning them off while they are not in view”. Consider this: is a line out of view if its start and end points are out of view? The answer is no. 😉 If you think about it, the complexity is mind-boggling…
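To illustrate why endpoint checks are not enough: a connection can cross the view window even though both of its endpoints lie outside it, so a correct cull needs something like a segment-versus-rectangle test. This is a straight-line sketch only (real splines are curved, which makes it harder still), and the names are made up:

using UnityEngine;

// Sketch: a line segment can touch the view rect even when neither endpoint is inside it.
// Collinear/touching edge cases are ignored for brevity.
public static class SplineCullingExample
{
    // True if the straight segment a-b touches the view rectangle.
    public static bool SegmentTouchesView(Vector2 a, Vector2 b, Rect view)
    {
        if (view.Contains(a) || view.Contains(b))
            return true;

        // Both endpoints are outside, but the segment may still cross an edge of the view.
        Vector2 bl = view.min;
        Vector2 tr = view.max;
        Vector2 br = new Vector2(tr.x, bl.y);
        Vector2 tl = new Vector2(bl.x, tr.y);
        return SegmentsIntersect(a, b, bl, br) || SegmentsIntersect(a, b, br, tr)
            || SegmentsIntersect(a, b, tr, tl) || SegmentsIntersect(a, b, tl, bl);
    }

    static bool SegmentsIntersect(Vector2 p1, Vector2 p2, Vector2 q1, Vector2 q2)
    {
        float d1 = Cross(q2 - q1, p1 - q1);
        float d2 = Cross(q2 - q1, p2 - q1);
        float d3 = Cross(p2 - p1, q1 - p1);
        float d4 = Cross(p2 - p1, q2 - p1);
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }

    static float Cross(Vector2 u, Vector2 v) => u.x * v.y - u.y * v.x;
}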
May 6, 2025 at 9:48 am #1519
We love this idea and will discuss it with cellock. But good things take time and great things take a lot of time.
May 6, 2025 at 9:46 am #1518
Unfortunately not (yet). It’s on our ToDo List however. For now you can check using IsIdentical against all scene objects but that can quickly become tedious, I know. Patience… 😉
May 6, 2025 at 9:40 am #1513
I had a look at the product but I also couldn’t find the corresponding link. The link should be a button with an internal reference like this:
hopper:https://xr4ed.cellock.com/product/b2bde9a0-1595-40f0-b260-5cdd006d1998/Greenhouse.vrml
The hopper will automatically start when such a link is encountered and interpret/run the vrml file.
Maybe you can check with Neofitos (nvlotom@cellock.com) from cellock to find out where the button is located.