
Tiles render a black margin depending on screen size

Started by January 24, 2022 04:19 PM
3 comments, last by buxx 2 years, 9 months ago

Hello!

I'm totally new to your community and happy to join!

I've had a rendering problem for a long time and I don't know how to solve it. I'm developing a roguelike game where I display a 2D top-view map based on a simple PNG file like this:

(attached image: test.png)

But with different game libraries I get the same problem (it doesn't seem to happen on every computer) when I resize the window. A black border is sometimes visible (depending on the window size).

(screenshot: rendering bug)

I think it is “float” related. Tile positions, sizes, etc. are floats. Maybe the GPU's use of floats causes pixel-selection “errors” on the source texture? I don't know. Any idea what is happening, and how to solve it?

If you want to try it, the code is in Rust:

Cargo.toml:

[package]
name = "demo_squads"
version = "0.1.0"
edition = "2021"

[dependencies]
macroquad = "0.3"

src/main.rs:

use macroquad::prelude::*;

#[macroquad::main("test")]
async fn main() {
    let tile_size = 32.;
    let world_size_tiles = 64; // square world
    let world_size = world_size_tiles as f32 * tile_size;
    let texture = load_texture("test.png").await.unwrap();
    let background_tile_source = Rect::new(0., 0., tile_size, tile_size);
    let foreground_tile_source = Rect::new(32., 32., tile_size, tile_size);

    loop {
        clear_background(BLACK);

        let zoom_x = (world_size / screen_width()) * 2.;
        let zoom_y = (world_size / screen_height()) * 2.;

        set_camera(&Camera2D {
            zoom: Vec2::new(zoom_x, zoom_y),
            offset: Vec2::new(-0.5, 0.),
            ..Default::default()
        });

        for row_i in 0..world_size_tiles {
            for col_i in 0..world_size_tiles {
                let real_dest_x = col_i as f32 * tile_size;
                let real_dest_y = row_i as f32 * tile_size;
                let camera_dest_x = real_dest_x / world_size;
                let camera_dest_y = real_dest_y / world_size;
                let dest_size_x = tile_size / world_size;
                let dest_size_y = tile_size / world_size;

                // background
                draw_texture_ex(
                    texture,
                    camera_dest_x,
                    camera_dest_y,
                    WHITE,
                    DrawTextureParams {
                        source: Some(background_tile_source),
                        dest_size: Some(Vec2::new(dest_size_x, dest_size_y)),
                        flip_y: true,
                        ..Default::default()
                    },
                );

                // foreground
                draw_texture_ex(
                    texture,
                    camera_dest_x,
                    camera_dest_y,
                    WHITE,
                    DrawTextureParams {
                        source: Some(foreground_tile_source),
                        dest_size: Some(Vec2::new(dest_size_x, dest_size_y)),
                        flip_y: true,
                        ..Default::default()
                    },
                );
            }
        }

        next_frame().await
    }
}

And thanks for your time if you read this!


   let zoom_x = (world_size / screen_width()) * 2.;
   let zoom_y = (world_size / screen_height()) * 2.;

The zoom factor should be the same in both directions, unless you need to compensate for a non-square pixel aspect ratio (very unlikely), and it should be an integer. Round it down and add a border around the slightly shrunk tiles; it may or may not be a good idea to display a variable number of tiles, more on larger screens.
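
A minimal sketch of one way to read that advice in Rust, using macroquad's screen_width()/screen_height() and the sizes from your post (pixels_per_tile is a hypothetical helper, not a macroquad function):

use macroquad::prelude::*;

// Pick one whole number of screen pixels per tile, the same in both
// directions; whatever doesn't divide evenly becomes a border instead
// of tiles of slightly different widths.
fn pixels_per_tile(world_size_tiles: f32) -> f32 {
    let fit = screen_width().min(screen_height()) / world_size_tiles;
    fit.floor().max(1.0) // never shrink below 1 pixel per tile
}

The zoom can then be derived from that value the same way your code derives it from world_size, and the leftover window pixels show up as an even black margin rather than a margin that changes with the window size.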

Omae Wa Mou Shindeiru


buxx said:
I think it is “float” related. Tile positions, sizes, etc. are floats. Maybe the GPU's use of floats causes pixel-selection “errors” on the source texture? I don't know. Any idea what is happening, and how to solve it?

It looks like a rounding issue. Looking at your code, the cause may not be visible there; I guess it's inside draw_texture_ex().
I guess you give it only tile coordinates, and the tile vertex positions are calculated inside the function (or I fail to read Rust).
What probably happens is that the 4 vertices of each tile are calculated individually, eventually with some unintended rounding inside the function, or the rounding happens later on the GPU.

This code would be an example of how to make sure all vertices are exactly the same for all tiles sharing them (C code; I'm trying to keep it simple):

void DrawTile (int x, int y,            // integer grid coordinates
	float scalingX, float offsetX,  // projection
	float scalingY, float offsetY   /* same for texture, color, etc. */)
{
	// calculate tile vertex positions in some clockwise order

	float v0x = (float)x * scalingX + offsetX;
	float v0y = (float)y * scalingY + offsetY;

	// add 1 to x as an integer BEFORE converting to float, so the initial
	// coordinates are always exact integers and match the adjacent tiles
	// sharing this vertex
	float v1x = (float)(x + 1) * scalingX + offsetX;
	float v1y = (float)y * scalingY + offsetY;

	float v2x = (float)(x + 1) * scalingX + offsetX;
	float v2y = (float)(y + 1) * scalingY + offsetY;

	float v3x = (float)x * scalingX + offsetX;
	float v3y = (float)(y + 1) * scalingY + offsetY;

	// now drawing with those positions, there should be no gaps
}

If you don't do this strictly, tiny differences can cause mismatches due to triangle edge rounding on the GPU.
Usually this does not happen because we use shared vertices for meshes or grids. But if you don't use shared vertices, you must make sure they are equal for the same grid position.
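
Applied to the loop in your post (a sketch against the macroquad code above, not a drop-in patch), that would mean deriving both edges of each destination rect from the integer tile indices and taking the size as their difference, so adjacent tiles get exactly the same shared edge:

// inside the row/col loop of main.rs
let x0 = (col_i as f32 * tile_size) / world_size;
let y0 = (row_i as f32 * tile_size) / world_size;
// add 1 while still an integer, then convert, same trick as in the C example
let x1 = ((col_i + 1) as f32 * tile_size) / world_size;
let y1 = ((row_i + 1) as f32 * tile_size) / world_size;
let dest_size = Vec2::new(x1 - x0, y1 - y0); // instead of a constant tile_size / world_size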

Thanks for your replies! I'm studying them!


This topic is closed to new replies.
