-rw-r--r--  doc/classes/ARVRAnchor.xml                    |  6
-rw-r--r--  doc/classes/ARVRCamera.xml                    |  6
-rw-r--r--  doc/classes/ARVRController.xml                |  6
-rw-r--r--  doc/classes/ARVROrigin.xml                    |  2
-rw-r--r--  drivers/gles2/rasterizer_canvas_gles2.cpp     |  8
-rw-r--r--  drivers/gles2/rasterizer_scene_gles2.cpp      | 18
-rw-r--r--  drivers/gles3/rasterizer_canvas_gles3.cpp     |  5
-rw-r--r--  editor/plugins/canvas_item_editor_plugin.cpp  |  4
-rw-r--r--  editor/plugins/polygon_2d_editor_plugin.cpp   |  1
9 files changed, 44 insertions, 12 deletions
diff --git a/doc/classes/ARVRAnchor.xml b/doc/classes/ARVRAnchor.xml
index fa93d9668c..5b9188b171 100644
--- a/doc/classes/ARVRAnchor.xml
+++ b/doc/classes/ARVRAnchor.xml
@@ -5,8 +5,8 @@
     </brief_description>
     <description>
         The ARVR Anchor point is a spatial node that maps a real world location identified by the AR platform to a position within the game world. For example, as long as plane detection in ARKit is on, ARKit will identify and update the position of planes (tables, floors, etc) and create anchors for them.
-        This node is mapped to one of the anchors through its unique id. When you receive a signal that a new anchor is available you should add this node to your scene for that anchor. You can predefine nodes and set the id and the nodes will simply remain on 0,0,0 until a plane is recognised.
-        Keep in mind that as long as plane detection is enable the size, placing and orientation of an anchor will be updates as the detection logic learns more about the real world out there especially if only part of the surface is in view.
+        This node is mapped to one of the anchors through its unique id. When you receive a signal that a new anchor is available, you should add this node to your scene for that anchor. You can predefine nodes and set the id and the nodes will simply remain on 0,0,0 until a plane is recognised.
+        Keep in mind that, as long as plane detection is enabled, the size, placing and orientation of an anchor will be updated as the detection logic learns more about the real world out there especially if only part of the surface is in view.
     </description>
     <tutorials>
     </tutorials>
@@ -31,7 +31,7 @@
         <return type="Plane">
         </return>
         <description>
-            Returns a plane aligned with our anchor, handy for intersection testing
+            Returns a plane aligned with our anchor; handy for intersection testing.
         </description>
     </method>
     <method name="get_size" qualifiers="const">
diff --git a/doc/classes/ARVRCamera.xml b/doc/classes/ARVRCamera.xml
index e74b435e11..aca8282a7c 100644
--- a/doc/classes/ARVRCamera.xml
+++ b/doc/classes/ARVRCamera.xml
@@ -1,11 +1,11 @@
 <?xml version="1.0" encoding="UTF-8" ?>
 <class name="ARVRCamera" inherits="Camera" category="Core" version="3.1">
     <brief_description>
-        A camera node with a few overrules for AR/VR applied such as location tracking.
+        A camera node with a few overrules for AR/VR applied, such as location tracking.
     </brief_description>
     <description>
-        This is a helper spatial node for our camera, note that if stereoscopic rendering is applicable (VR-HMD) most of the camera properties are ignored as the HMD information overrides them. The only properties that can be trusted are the near and far planes.
-        The position and orientation of this node is automatically updated by the ARVR Server to represent the location of the HMD if such tracking is available and can thus be used by game logic. Note that in contrast to the ARVR Controller the render thread has access to the most up to date tracking data of the HMD and the location of the ARVRCamera can lag a few milliseconds behind what is used for rendering as a result.
+        This is a helper spatial node for our camera; note that, if stereoscopic rendering is applicable (VR-HMD), most of the camera properties are ignored, as the HMD information overrides them. The only properties that can be trusted are the near and far planes.
+        The position and orientation of this node is automatically updated by the ARVR Server to represent the location of the HMD if such tracking is available and can thus be used by game logic. Note that, in contrast to the ARVR Controller, the render thread has access to the most up-to-date tracking data of the HMD and the location of the ARVRCamera can lag a few milliseconds behind what is used for rendering as a result.
     </description>
     <tutorials>
     </tutorials>
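A side note on the ARVRAnchor change above: get_plane() is described as returning a plane aligned with the anchor, handy for intersection testing. The sketch below shows roughly what such a test looks like as plain plane/ray math. It is standalone C++ under the assumed convention n . p = d; the types and names are made up for illustration and are not Godot's Plane or ARVRAnchor API.

// Standalone plane/ray intersection sketch; hypothetical types, not Godot classes.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Plane {
    Vec3 n;   // unit normal of the detected surface
    double d; // offset: n . p = d for every point p on the plane
};

// Returns true and writes the hit point if the ray (origin o, direction dir) crosses the plane in front of o.
static bool intersect_ray(const Plane &p, Vec3 o, Vec3 dir, Vec3 *hit) {
    double denom = dot(p.n, dir);
    if (std::fabs(denom) < 1e-8)
        return false; // ray runs parallel to the plane
    double t = (p.d - dot(p.n, o)) / denom;
    if (t < 0.0)
        return false; // the plane lies behind the ray origin
    *hit = { o.x + dir.x * t, o.y + dir.y * t, o.z + dir.z * t };
    return true;
}

int main() {
    Plane floor_plane = { { 0.0, 1.0, 0.0 }, 0.0 }; // a horizontal plane, like a detected floor anchor
    Vec3 hit;
    if (intersect_ray(floor_plane, { 0.0, 2.0, 0.0 }, { 0.0, -1.0, 0.0 }, &hit))
        std::printf("hit at %.1f %.1f %.1f\n", hit.x, hit.y, hit.z); // prints: hit at 0.0 0.0 0.0
    return 0;
}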
diff --git a/doc/classes/ARVRController.xml b/doc/classes/ARVRController.xml
index d3d6fce537..ccb55375d2 100644
--- a/doc/classes/ARVRController.xml
+++ b/doc/classes/ARVRController.xml
@@ -4,8 +4,8 @@
         A spatial node representing a spatially tracked controller.
     </brief_description>
     <description>
-        This is a helper spatial node that is linked to the tracking of controllers. It also offers several handy pass throughs to the state of buttons and such on the controllers.
-        Controllers are linked by their id. You can create controller nodes before the controllers are available. Say your game always uses two controllers (one for each hand) you can predefine the controllers with id 1 and 2 and they will become active as soon as the controllers are identified. If you expect additional controllers to be used you should react to the signals and add ARVRController nodes to your scene.
+        This is a helper spatial node that is linked to the tracking of controllers. It also offers several handy passthroughs to the state of buttons and such on the controllers.
+        Controllers are linked by their id. You can create controller nodes before the controllers are available. Say your game always uses two controllers (one for each hand) you can predefine the controllers with id 1 and 2 and they will become active as soon as the controllers are identified. If you expect additional controllers to be used, you should react to the signals and add ARVRController nodes to your scene.
         The position of the controller node is automatically updated by the ARVR Server. This makes this node ideal to add child nodes to visualise the controller.
     </description>
     <tutorials>
@@ -64,7 +64,7 @@
         <member name="controller_id" type="int" setter="set_controller_id" getter="get_controller_id">
            The controller's id. A controller id of 0 is unbound and will always result in an inactive node. Controller id 1 is reserved for the first controller that identifies itself as the left hand controller and id 2 is reserved for the first controller that identifies itself as the right hand controller.
-           For any other controller that the [ARVRServer] detects we continue with controller id 3.
+           For any other controller that the [ARVRServer] detects, we continue with controller id 3.
            When a controller is turned off, its slot is freed. This ensures controllers will keep the same id even when controllers with lower ids are turned off.
         </member>
         <member name="rumble" type="float" setter="set_rumble" getter="get_rumble">
diff --git a/doc/classes/ARVROrigin.xml b/doc/classes/ARVROrigin.xml
index 80626338f2..55062ba3ae 100644
--- a/doc/classes/ARVROrigin.xml
+++ b/doc/classes/ARVROrigin.xml
@@ -6,7 +6,7 @@
     <description>
         This is a special node within the AR/VR system that maps the physical location of the center of our tracking space to the virtual location within our game world.
         There should be only one of these nodes in your scene and you must have one. All the ARVRCamera, ARVRController and ARVRAnchor nodes should be direct children of this node for spatial tracking to work correctly.
-        It is the position of this node that you update when you're character needs to move through your game world while we're not moving in the real world. Movement in the real world is always in relation to this origin point.
+        It is the position of this node that you update when your character needs to move through your game world while we're not moving in the real world. Movement in the real world is always in relation to this origin point.
         So say that your character is driving a car, the ARVROrigin node should be a child node of this car. If you implement a teleport system to move your character, you change the position of this node. Etc.
     </description>
     <tutorials>
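The ARVRController wording above spells out the id policy: 0 is unbound, 1 and 2 are reserved for the first left- and right-hand controllers, anything else continues from 3, and a slot is freed when its controller turns off so the remaining ids never shift. A minimal standalone sketch of that allocation policy follows; all names are hypothetical and this is not the ARVRServer implementation.

// Standalone sketch of the controller id policy described above; hypothetical code.
#include <cstdio>
#include <map>
#include <string>

struct ControllerSlots {
    std::map<int, std::string> slots; // id -> controller name

    int add(const std::string &name, bool is_left_hand, bool is_right_hand) {
        if (is_left_hand && slots.count(1) == 0) { slots[1] = name; return 1; }  // id 1: first left hand controller
        if (is_right_hand && slots.count(2) == 0) { slots[2] = name; return 2; } // id 2: first right hand controller
        int id = 3;               // any other controller continues with id 3
        while (slots.count(id))
            id++;                 // lowest free slot, so ids already handed out never shift
        slots[id] = name;
        return id;
    }

    void remove(int id) {
        slots.erase(id); // turning a controller off frees its slot but leaves every other id untouched
    }
};

int main() {
    ControllerSlots s;
    int left = s.add("left", true, false);        // -> 1
    int right = s.add("right", false, true);      // -> 2
    int extra = s.add("tracker", false, false);   // -> 3
    s.remove(left);                               // left hand turned off; 2 and 3 keep their ids
    int extra2 = s.add("tracker2", false, false); // -> 4 (slot 3 is still taken, slot 1 stays reserved)
    std::printf("%d %d %d %d\n", left, right, extra, extra2); // 1 2 3 4
}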
diff --git a/drivers/gles2/rasterizer_canvas_gles2.cpp b/drivers/gles2/rasterizer_canvas_gles2.cpp
index 14bd71c6eb..bbae3d3d22 100644
--- a/drivers/gles2/rasterizer_canvas_gles2.cpp
+++ b/drivers/gles2/rasterizer_canvas_gles2.cpp
@@ -1303,6 +1303,14 @@ void RasterizerCanvasGLES2::canvas_render_items(Item *p_item_list, int p_z, cons
 
                 t = t->get_ptr();
 
+#ifdef TOOLS_ENABLED
+                if (t->detect_normal && texture_hints[i] == ShaderLanguage::ShaderNode::Uniform::HINT_NORMAL) {
+                    t->detect_normal(t->detect_normal_ud);
+                }
+#endif
+                if (t->render_target)
+                    t->render_target->used_in_frame = true;
+
                 if (t->redraw_if_visible) {
                     VisualServerRaster::redraw_request();
                 }
diff --git a/drivers/gles2/rasterizer_scene_gles2.cpp b/drivers/gles2/rasterizer_scene_gles2.cpp
index d9f1fe1212..e4783e907b 100644
--- a/drivers/gles2/rasterizer_scene_gles2.cpp
+++ b/drivers/gles2/rasterizer_scene_gles2.cpp
@@ -1245,6 +1245,24 @@ bool RasterizerSceneGLES2::_setup_material(RasterizerStorageGLES2::Material *p_m
 
         t = t->get_ptr();
 
+        if (t->redraw_if_visible) { //must check before proxy because this is often used with proxies
+            VisualServerRaster::redraw_request();
+        }
+
+#ifdef TOOLS_ENABLED
+        if (t->detect_3d) {
+            t->detect_3d(t->detect_3d_ud);
+        }
+#endif
+
+#ifdef TOOLS_ENABLED
+        if (t->detect_normal && texture_hints[i] == ShaderLanguage::ShaderNode::Uniform::HINT_NORMAL) {
+            t->detect_normal(t->detect_normal_ud);
+        }
+#endif
+        if (t->render_target)
+            t->render_target->used_in_frame = true;
+
         glBindTexture(t->target, t->tex_id);
 
         if (i == 0) {
             state.current_main_tex = t->tex_id;
diff --git a/drivers/gles3/rasterizer_canvas_gles3.cpp b/drivers/gles3/rasterizer_canvas_gles3.cpp
index bc1883b09d..06b84aeab4 100644
--- a/drivers/gles3/rasterizer_canvas_gles3.cpp
+++ b/drivers/gles3/rasterizer_canvas_gles3.cpp
@@ -1144,6 +1144,11 @@ void RasterizerCanvasGLES3::_canvas_item_render_commands(Item *p_item, Item *cur
 
 void RasterizerCanvasGLES3::_copy_texscreen(const Rect2 &p_rect) {
 
+    if (storage->frame.current_rt->effects.mip_maps[0].sizes.size() == 0) {
+        ERR_EXPLAIN("Can't use screen texture copying in a render target configured without copy buffers");
+        ERR_FAIL();
+    }
+
     glDisable(GL_BLEND);
 
     state.canvas_texscreen_used = true;
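Both GLES2 hunks above add the same texture-binding pattern: fire an optional, editor-only detection callback (a stored function pointer plus opaque user data, compiled in only under TOOLS_ENABLED) and mark the owning render target as used this frame. The standalone sketch below mirrors that shape; the struct names are hypothetical, not the real RasterizerStorage types.

// Standalone sketch of the detection-callback / used_in_frame pattern; hypothetical structs.
#include <cstdio>

struct RenderTarget {
    bool used_in_frame = false; // cleared by the renderer at the start of each frame
};

struct Texture {
    void (*detect_normal)(void *p_userdata) = nullptr; // editor hook: "this texture was sampled as a normal map"
    void *detect_normal_ud = nullptr;                   // opaque user data handed back to the hook
    RenderTarget *render_target = nullptr;              // non-null when the texture comes from a viewport
};

static void bind_texture(Texture *t, bool uniform_has_normal_hint) {
#ifdef TOOLS_ENABLED
    // Only tools builds register these callbacks, so the check is compiled out of release builds.
    if (t->detect_normal && uniform_has_normal_hint)
        t->detect_normal(t->detect_normal_ud);
#endif
    if (t->render_target)
        t->render_target->used_in_frame = true; // record that the viewport's texture was actually sampled this frame
}

int main() {
    RenderTarget rt;
    Texture tex;
    tex.render_target = &rt;
    bind_texture(&tex, /*uniform_has_normal_hint=*/true);
    std::printf("used_in_frame: %d\n", rt.used_in_frame ? 1 : 0); // used_in_frame: 1
}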
diff --git a/editor/plugins/canvas_item_editor_plugin.cpp b/editor/plugins/canvas_item_editor_plugin.cpp
index b0e812a130..86d6799b19 100644
--- a/editor/plugins/canvas_item_editor_plugin.cpp
+++ b/editor/plugins/canvas_item_editor_plugin.cpp
@@ -573,10 +573,10 @@ bool CanvasItemEditor::_get_bone_shape(Vector<Vector2> *shape, Vector<Vector2> *
     Node2D *from_node = Object::cast_to<Node2D>(ObjectDB::get_instance(bone->key().from));
     Node2D *to_node = Object::cast_to<Node2D>(ObjectDB::get_instance(bone->key().to));
 
-    if (!from_node->is_inside_tree())
-        return false; //may have been removed
     if (!from_node)
         return false;
+    if (!from_node->is_inside_tree())
+        return false; //may have been removed
     if (!to_node && bone->get().length == 0)
         return false;
diff --git a/editor/plugins/polygon_2d_editor_plugin.cpp b/editor/plugins/polygon_2d_editor_plugin.cpp
index 1ac7ee89d6..1d7b4ffa41 100644
--- a/editor/plugins/polygon_2d_editor_plugin.cpp
+++ b/editor/plugins/polygon_2d_editor_plugin.cpp
@@ -482,6 +482,7 @@ void Polygon2DEditor::_uv_input(const Ref<InputEvent> &p_input) {
             polygons_prev = node->get_polygons();
             node->set_polygon(points_prev);
             node->set_uv(points_prev);
+            node->set_internal_vertex_count(0);
 
             uv_edit_draw->update();
         } else {
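The canvas_item_editor_plugin.cpp hunk above is purely an ordering fix: is_inside_tree() was called before the null check, so a bone whose endpoint had already been freed would dereference a null pointer. A tiny standalone example of why the guards must run in that order; the types are hypothetical, not the editor code.

// Standalone illustration of null check before member call; hypothetical types.
#include <iostream>

struct FakeNode2D {
    bool inside_tree = false;
    bool is_inside_tree() const { return inside_tree; }
};

static bool can_draw_bone(const FakeNode2D *from_node) {
    if (!from_node)
        return false; // must come first: calling a member through a null pointer is undefined behavior
    if (!from_node->is_inside_tree())
        return false; // the node still exists but was removed from the scene tree
    return true;
}

int main() {
    FakeNode2D node;
    std::cout << can_draw_bone(nullptr) << "\n"; // 0, and no crash
    std::cout << can_draw_bone(&node) << "\n";   // 0, not in the tree yet
    node.inside_tree = true;
    std::cout << can_draw_bone(&node) << "\n";   // 1
}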