Image mapping : Peer review needed

The Oric video chip is not an easy beast to master, so any trick or method that helps achieve nice visual results is welcome. Don't hesitate to comment (nicely) on other people's tricks and pictures :)
User avatar
jbperin
Flight Lieutenant
Posts: 480
Joined: Wed Nov 06, 2019 11:00 am
Location: Valence, France

Image mapping : Peer review needed

Post by jbperin »

Hi fellow Oricians,


During this summer vacation I took the time to explore a topic that Dbug had once suggested I look into, and that I had always considered impossible: image mapping in real time on the Oric.

I want to share the result of my work on the topic here, because I am a bit surprised by what I've discovered so far.

And I would love to receive comments and remarks from peers, to let me know whether I'm totally wrong, or whether I'm right to start thinking that something is possible in this field.

Unfortunately I don't have any running program to show. I only have an idea for a method that could possibly make real-time image mapping on the Oric feasible.

What I call image mapping is applying an image to a shape on screen that is not necessarily rectangular.
[Image: a picture mapped onto a non-rectangular shape on screen]

As far as I know, the only attempt at such a thing was in the intro of the game 1337, where the intro text scrolls up the screen with a stunning perspective effect like in the Star Wars movies:
https://youtu.be/62lJQJy-v6w?t=31

The reason this topic appeals to me is that I have developed some 3D routines with glOric and, even though I have always thought it was impossible, I've always had Doom in mind and wondered how close one could get to Wolfenstein on a 1 MHz 6502 CPU.
The underlying question being: "Is there a space between 3D munch and Wolfenstein?"

In glOric, as in most 3D rendering engines, surfaces are made of triangles, so a rectangular image has to be mapped onto two triangles.
The image below introduces this principle and poses the problem on a single triangle.
[Image: a rectangular image split into two triangles, with the mapping problem posed on a single triangle]

The problem I wanted to address is the following: if I want to fill the triangle P0, P1, P2 on the screen (right part of the image), I need to find, for a given point P, the pixel coordinates of P in the original image (left part of the image).

The image below describes a little more accurately what I am trying to do:
[Image: the source image (left) with pixel (Pc, Pl), and the on-screen triangle P0, P1, P2 (right) with point P at (Px, Py)]

Presented this way, the problem consists of computing the pixel column and line (Pc, Pl) in the image from:
  • the point's coordinates on screen (Px, Py)
  • the on-screen positions of the triangle's vertices P0, P1, P2
The usual way to deal with this kind of problem is to express the point P as a linear combination of two vectors collinear with P0P1 and P0P2. The coefficients of this linear combination are obtained using cross products.

The principle is well described here:

https://mathworld.wolfram.com/TriangleInterior.html
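
For reference, the classical version looks roughly like this (a sketch of the principle only, with names of my own, not the optimised routines from the links below); the per-pixel cross products are where the multiplications come from:

Code:

typedef struct { int x, y; } Vec2;

static int cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

/* Returns 1 and writes the image coordinates (Pc, Pl) if P lies inside the
   triangle P0 P1 P2; u runs along P0P1 (columns), v along P0P2 (lines). */
int map_pixel(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p,
              int img_w, int img_h, int *pc, int *pl)
{
    Vec2 e1 = { p1.x - p0.x, p1.y - p0.y };
    Vec2 e2 = { p2.x - p0.x, p2.y - p0.y };
    Vec2 d  = { p.x  - p0.x, p.y  - p0.y };
    int den = cross(e1, e2);                 /* twice the triangle area */
    int u   = cross(d, e2);                  /* two multiplies each...  */
    int v   = cross(e1, d);                  /* ...for every pixel P    */
    if (den == 0) return 0;                  /* degenerate triangle     */
    if (den < 0) { den = -den; u = -u; v = -v; }
    if (u < 0 || v < 0 || u + v > den) return 0;   /* P is outside      */
    *pc = u * (img_w - 1) / den;             /* column in the image     */
    *pl = v * (img_h - 1) / den;             /* line in the image       */
    return 1;
}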

And Dbug had provided me with some examples of fast implementations on the 68000 processor:

http://s390174849.online.de/ray.tscc.de/fatmap.htm

http://www.multi.fi/~mbc/sources/fatmap2.txt

But I dislike this approach because it implies at least 4 multiplications for each pixel P in the triangle P0, P1, P2.
I think this approach was first developed for machines such as 286 processors and eventually successfully adapted to 68000 processors.
The main trick there is that these processors can perform fixed-point arithmetic on 16- or 32-bit values, avoiding floating-point arithmetic while keeping good enough accuracy.
But in my opinion that saving is not enough for our beloved 6502, because its clock frequency is far lower than those processors', and because a 16- or 32-bit addition used in fixed-point arithmetic takes a single instruction on a 286 or a 68000 but requires several additions with carry propagation on a 6502.

So I decided to look for an approach based on trigonometry rather than linear algebra.
I had already done this for the projection routine in glOric and the results were reasonably satisfying.
The main idea behind this approach is to avoid multiplications and to find a method that relies on trigonometric operators, because they can be pre-calculated in lookup tables.
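
For example, the tables could be built like this (just a sketch; the 256-step angle and 8.8 fixed point are assumptions of mine, not necessarily what I'll end up using):

Code:

#include <math.h>
#include <stdint.h>

/* Pre-calculated trigonometry: 256 "binary degrees" per turn, values in
   8.8 fixed point.  Built offline or at startup, so the 6502 only ever
   does table lookups, additions and shifts at render time. */
static int16_t sin_tab[256];
static int16_t cos_tab[256];

void build_trig_tables(void)
{
    const double pi = 3.14159265358979;
    int a;
    for (a = 0; a < 256; a++) {
        sin_tab[a] = (int16_t)(sin(a * 2.0 * pi / 256.0) * 256.0 + 0.5);
        cos_tab[a] = (int16_t)(cos(a * 2.0 * pi / 256.0) * 256.0 + 0.5);
    }
}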

I called:
  • l1 the line passing through P0 and P1
  • l1' the line passing through P and parallel to l1
  • l2 the line passing through P0 and P2
  • l2' the line passing through P and parallel to l2
I also identify the points:
  • A, at the intersection of l1 and l2'
  • B, at the intersection of l2 and l1'
and the angles:
  • alpha, between the X axis and l1 (or l1')
  • beta, between the X axis and l2 (or l2')
as shown in the figure below:

[Image: construction of the lines l1, l1', l2, l2', the points A and B, and the angles alpha and beta]

The interesting thing here is that:
- the ratio of the distances P0A and P0P1 gives the pixel's column in the image
- the ratio of the distances P0B and P0P2 gives the pixel's line in the image

Then I did a little bit of math to see where this idea could lead:

[Images: derivation of r1' and r2' as functions of alpha, beta, (X0 - Xp) and (Y0 - Yp)]

I know it might look a bit heavy at first sight, with lots of sines and cosines and big divisions and multiplications.

But if we look carefully at how the result is structured and what it is composed of, we can find some reason to hope for an efficient implementation.

Indeed, r1' and r2' are basically sums of two terms: one is a factor of (X0 - Xp) and the other is a factor of (Y0 - Yp).
If we use an algorithm that fills the triangle P0, P1, P2 pixel by pixel, from left to right and line by line, the quantity (X0 - Xp) only varies by one between a pixel and its immediate neighbour,
and the quantity (Y0 - Yp) only changes when the filling algorithm moves from one line to the next.

Moreover, all the quantities that depend only on the angles alpha and beta can be computed at the beginning of the frame render and remain unchanged throughout the filling process.
The same goes for the quantities bound to the distances between P0 and P1 or between P0 and P2, because these distances also remain unchanged during the filling process.

Thus, as surprising as it may appear, filling a triangle with pixels from an image can be done with something like 4 additions at each step... rather than 4 multiplications.
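
In code, the inner loop would be structured something like this (just a sketch to show the shape of it; kx1, ky1, kx2, ky2 stand for the per-frame constants that come out of the equations above, whatever their exact expressions, and get_pixel/put_pixel are placeholders):

Code:

#include <stdint.h>

extern uint8_t get_pixel(int pc, int pl);            /* image lookup, placeholder */
extern void    put_pixel(int px, int py, uint8_t c); /* screen write, placeholder */

/* kx1/ky1 and kx2/ky2 are the factors of (X0 - Xp) and (Y0 - Yp) in r1'
   and r2', computed once per frame from alpha, beta and the edge lengths.
   8.8 fixed point assumed. */
void fill_row(int y, int x_left, int x_right,
              int x0, int y0,
              int16_t kx1, int16_t ky1, int16_t kx2, int16_t ky2)
{
    int16_t r1 = (int16_t)((x0 - x_left) * kx1 + (y0 - y) * ky1);
    int16_t r2 = (int16_t)((x0 - x_left) * kx2 + (y0 - y) * ky2);
    int x;
    for (x = x_left; x <= x_right; x++) {
        put_pixel(x, y, get_pixel(r1 >> 8, r2 >> 8));
        r1 -= kx1;   /* Xp grows by 1, so (X0 - Xp) drops by 1:     */
        r2 -= kx2;   /* only additions/subtractions inside the loop */
    }
}
/* the row start values can themselves be stepped with ky1/ky2 from one
   line to the next, removing the per-row multiplications as well */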

It is such an unbelievable result that I couldn't help telling myself that I must have missed something.

Then I wrote a little Python script to check whether my formulae were correct.

[Image: output of the Python script checking the formulae]

And for now it looks like it works :shock:

I know a lot of work remains before getting something really working on a real machine.
For now it is only a theory based on some equations.
I might have missed something important that would destroy all hope...
But wow... if it ever really works, it could rock a lot!!!

So, what do you think of this approach?
Do you see any mistake in the reasoning?
Do you have any suggestions on how to implement it?
User avatar
kenneth
Squad Leader
Posts: 515
Joined: Fri Nov 26, 2010 9:11 pm
Location: France PdD
Contact:

Re: Image mapping : Peer review needed

Post by kenneth »

Hello.
Using the Oric memory may be faster than calculation for filling a triangle: save the two HIRES locations of the triangle's pixels on each line of the screen (the segment), swapping the two values so that the lower one comes first. Then, when the triangle outline is drawn, just work out how to fill the segments from the gaps between these saved addresses: no space, just fill in the missing pixels; a few spaces, fill with full bytes; more than that, use colour attributes to avoid overloading the processor.
User avatar
jbperin
Flight Lieutenant
Posts: 480
Joined: Wed Nov 06, 2019 11:00 am
Location: Valence, France

Re: Image mapping : Peer review needed

Post by jbperin »

kenneth wrote: Fri Aug 28, 2020 11:35 am Using the Oric memory may be faster than calculation for filling a triangle: save the two HIRES locations of the triangle's pixels on each line of the screen (the segment), swapping the two values so that the lower one comes first. Then, when the triangle outline is drawn, just work out how to fill the segments from the gaps between these saved addresses: no space, just fill in the missing pixels; a few spaces, fill with full bytes; more than that, use colour attributes to avoid overloading the processor.
I'm not sure I fully understand the suggestion, and that's a pity because it sounds very interesting as far as avoiding calculation is concerned.

Is it clear that, between two consecutive frame renders, the points P0, P1, P2 may have changed position on screen, and therefore so have the HIRES locations of the corresponding pixels in graphics memory?

The algorithm I am trying to design is precisely the one that is supposed to work out how to fill the segments between the triangle's points on a given line.

It should be noted that a horizontal line on screen does not necessarily correspond to a horizontal line in the image.
So the missing pixels might not be contiguous in the memory holding the image.
I'm not even sure they are equally spaced.
That's maybe why I don't understand your idea... and perhaps I need to work on a concrete example to better grasp this principle of saved addresses and how to run through the gaps you're talking about.

In the end, the goal is to find a way to correlate addresses in graphics memory with addresses of pixels in the image, so any idea that saves calculations deserves to be explored.

I'm going to work through a real example to see whether I've understood your idea correctly.
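
For the record, here is roughly how I picture the fill part for now (just a sketch to check my understanding, with names of my own, and a plain byte-per-pixel buffer instead of real HIRES bytes):

Code:

#include <string.h>

/* While drawing the three edges, remember the leftmost and rightmost X
   reached on every screen line, then fill the gap between them.  What it
   does not answer yet is WHICH image pixel to write at each position. */
#define LINES 200
static unsigned char xmin[LINES], xmax[LINES];

void reset_spans(void)
{
    memset(xmin, 255, LINES);   /* 255 > 0 means "nothing on this line" */
    memset(xmax, 0, LINES);
}

void touch(int x, int y)        /* call for every pixel of the 3 edges  */
{
    if (x < xmin[y]) xmin[y] = (unsigned char)x;
    if (x > xmax[y]) xmax[y] = (unsigned char)x;
}

void fill_spans(unsigned char *buf, int pitch, unsigned char value)
{
    int x, y;
    for (y = 0; y < LINES; y++)
        for (x = xmin[y]; x <= xmax[y]; x++)   /* skipped when xmin > xmax */
            buf[y * pitch + x] = value;
}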
User avatar
Dbug
Site Admin
Posts: 4444
Joined: Fri Jan 06, 2006 10:00 pm
Location: Oslo, Norway
Contact:

Re: Image mapping : Peer review needed

Post by Dbug »

I would suggest starting with something simple and obvious: a single quad rotating around, with just a big yellow smiley image on it :)

You could even start by rotating around one axis only (either vertical or horizontal), just to see how it looks and how fast it goes, then, if that seems doable, go further.

The advantage of the big smiley is that it would still be recognizable even in TEXT mode, which is optimal for performance, because as soon as you go down to the sub-character level you enter the world of pain (bit manipulation). That is generally when people start using C2P (Chunky To Planar) style algorithms: work in an easy-to-use "byte buffer" and use a fast conversion routine that can batch-convert it to the screen format.
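
For the sub-character case, the conversion typically has this shape (a rough C sketch, assuming a 240x200 byte-per-pixel buffer and, if I remember the HIRES byte layout correctly, six pixels per byte with bit 6 set for pixel data and bit 5 as the leftmost pixel; real code would be unrolled assembly):

Code:

#include <stdint.h>

/* Pack a chunky 240x200 buffer (one byte per pixel, 0 = off) into the
   40x200 HIRES bytes of the screen. */
void chunky_to_hires(const uint8_t *chunky, uint8_t *hires)
{
    int y, cell, bit;
    for (y = 0; y < 200; y++) {
        for (cell = 0; cell < 40; cell++) {
            uint8_t b = 0x40;                        /* "this is pixel data" */
            for (bit = 0; bit < 6; bit++)
                if (chunky[y * 240 + cell * 6 + bit])
                    b |= (uint8_t)(0x20 >> bit);     /* bit 5 = leftmost     */
            hires[y * 40 + cell] = b;
        }
    }
}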

Good luck!
User avatar
Badger
Pilot Officer
Posts: 84
Joined: Sat Sep 22, 2018 10:04 am
Location: Wigan, England

Re: Image mapping : Peer review needed

Post by Badger »

I may be thinking about this all wrong, or someone may have said the same thing in a different way, but couldn't you use your calculations to generate a set of look-up tables of offsets to map the image onto the face of the object?

I'll see if I can explain it in a logical way that makes sense to me and other people :)

Bear with me, I'm kind of thinking out loud.

You have a 30x30 pixel cube that can rotate about its X axis, and you want to project a 20x20 pixel image onto one or more of its faces.
Let's say for now the cube is static in its Z axis position, so the 30x30 size remains constant.

You know the X and Y coordinates of the top-left of one of the faces (CFx, CFy) and the rotation of that face (CFr). Again, for now we will assume rotation about one axis only.

If we start at rotation 0 (CFr=0) the face of the cube is a square and we can map the image directly 1 for 1 with the required offset from CFx,CFy.

Rotating the cube will change CFx and CFy, but these actually don't matter since we are mapping based on a relative offset from these numbers.

The cube face will then become trapezoidal. For each X and Y coordinate bounded by the trapezoid, you would look up the corresponding pixel in the image to be mapped, based on the rotation of the face. So at, say, a 45 degree rotation you would still assume the size of the face was 30x30 (i.e. the original square) but would only map every other pixel of the mapping image. That would in effect display the compressed image on the face (or not, if my logic is rubbish :D ). So for each value of rotation from 1 to 179 degrees you would look up which pixels to map (0 and 180 are effectively hidden so you could skip those, and 181-359 are hidden too, so you can skip those as well).

So for rotation of the cube about two axes you would need two lookup tables, to map the X and Y coordinates onto the face.

For movement in apparent distance (i.e. cube size) you would need a third table to describe what to do with the extra pixels (if the cube became larger); I guess for each step of so many pixels you would repeat a line of the image, based on the other two tables, or skip lines in the image if the cube became smaller.

I don't know if this logic is correct, but would it be easiest to store the mapped image at its maximum size, based on the largest the face could possibly be, and then reduce it (by skipping lines) according to the Z axis distance? This may mean storing large images in memory, but I think it would be the easiest to do and might look better, since compressed images look better than expanded ones.

So to summarise: you would have your image in memory, the origin point of the face you wish to map to, the rotation of that face and the size (or distance from the viewport) of the face. You would then use three lookup tables to work out which pixel from the image you need for each pixel on the face of the object. The tables could be calculated in an initialisation routine using the maths stated in the original post, or calculated externally and hard coded.
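
Something like this, maybe (I can't really code it myself, so treat it as a very rough sketch of the table idea, with made-up names and sizes):

Code:

#include <stdint.h>

#define FACE_W    30     /* widest the face can appear on screen */
#define ROT_STEPS 180    /* rotations 0..179; the rest is hidden */

/* For each rotation step, one entry per screen column of the face saying
   which source image column to sample (0xFF = nothing to draw there).
   Filled once by an init routine, or generated offline. */
static uint8_t col_map[ROT_STEPS][FACE_W];

/* Drawing one row of the face then becomes pure table lookups. */
void draw_face_row(uint8_t *dst, const uint8_t *image_row, uint8_t rotation)
{
    int x;
    for (x = 0; x < FACE_W; x++) {
        uint8_t src = col_map[rotation][x];
        if (src != 0xFF)
            dst[x] = image_row[src];
    }
}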

Knowing my luck, all these things would take up 47.9k of memory just to project a smiley face onto a cube, but that's my logic :D


I do hope that made some kind of sense; I haven't the skills to do any kind of coding on this.

Please feel free to ignore, like I said I was typing while thinking.
Amateurs built the Ark, Professionals built the Titanic.
User avatar
jbperin
Flight Lieutenant
Posts: 480
Joined: Wed Nov 06, 2019 11:00 am
Location: Valence, France

Re: Image mapping : Peer review needed

Post by jbperin »

Dbug wrote: Fri Aug 28, 2020 6:34 pm as soon as you go down to the sub-character level you enter the world of pain (bit manipulation). That is generally when people start using C2P (Chunky To Planar) style algorithms: work in an easy-to-use "byte buffer" and use a fast conversion routine that can batch-convert it to the screen format.

Good luck!
OMG F**K!! You're damn right, man...
I think you've just pointed out the big thing I had totally missed.
The bit manipulation stuff looks far more complicated than I thought.
I think my mind had swept this issue under the carpet.

Sounds like a game over message :-(

Badger wrote: Sat Aug 29, 2020 9:58 am
Knowing my luck, all these things would take up 47.9k of memory just to project a smiley face onto a cube, but that's my logic :D

I do hope that made some kind of sense; I haven't the skills to do any kind of coding on this.

Please feel free to ignore, like I said I was typing while thinking.
Well, I think it makes sense. Your message gives me some nice ideas.
I tried to design an algorithm that could run in real time on the Oric.
But if it fails to be fast enough, I may decide to use it to precalculate things indexed by the rotation, as you suggest.
It sounds very smart.
I'm first going to see if I can use my calculations in real time, because I feel it is a more flexible and adaptable solution.
But if it turns out to be too slow, I will definitely come back to your proposal.
Thanks a lot.
User avatar
Chema
Game master
Posts: 3014
Joined: Tue Jan 17, 2006 10:55 am
Location: Gijón, SPAIN
Contact:

Re: Image mapping : Peer review needed

Post by Chema »

A bit off topic, I know, but has anyone ever played with raycasting on an Oric? I have the feeling that it is way beyond its capabilities, but who knows...

I mean having something faster than 1 fps on, say, 1/2 the hires area..
User avatar
Dbug
Site Admin
Posts: 4444
Joined: Fri Jan 06, 2006 10:00 pm
Location: Oslo, Norway
Contact:

Re: Image mapping : Peer review needed

Post by Dbug »

Chema wrote: Sat Aug 29, 2020 6:41 pm A bit off topic, I know, but has anyone ever played with raycasting on an Oric? I have the feeling that it is way beyond its capabilities, but who knows...

I mean having something faster than 1 fps on, say, 1/2 the hires area..
I did some tests back in 2001; for what it's worth I added it to the SVN :)
https://osdn.net/projects/oricsdk/scm/s ... s/Raycast/
User avatar
jbperin
Flight Lieutenant
Posts: 480
Joined: Wed Nov 06, 2019 11:00 am
Location: Valence, France

Re: Image mapping : Peer review needed

Post by jbperin »

OK, I made a fully working prototype of the algorithm in Python.
I wrote the equations for the second triangle and the arithmetic part that relates a point within the triangle to pixel addresses.
I tested it on Orichir (thanks Goyo for this wonderful tool) and I'm now confident that:
- my equations are OK
- the principle works with different point and triangle orientations

The problem is that, for now, my prototype implements the algorithm with floating point values.
So the next step is to make a version that works with integers and/or fixed point values in order to prepare a C implementation.

I already have the arctangent and the Euclidean norm working on integers.
I saw some nice implementations of fixed-point sine and cosine in code by Chema and Dbug.
So I should be able to do the job by combining it all.
Dbug wrote: Sat Aug 29, 2020 7:00 pm I did some tests back in 2001; for what it's worth I added it to the SVN :)
https://osdn.net/projects/oricsdk/scm/s ... s/Raycast/
This project should never have been stopped.
I recently watched the "High Score" series on Netflix, and it is clear that the people who used this technology entered the history of game making.
I urge you to do a text-mode version of this stuff, which should run 5 times faster.
And if you use the norm I made for glOric (to compute the distance), you could use that distance to pick the character => a stunning perspective effect... trust me...

Look what it gives when the distance is used to set the darkness of the vertical line:

[Image: raycast view where the distance sets the darkness of each vertical line]

Isn't that killer?
User avatar
Dbug
Site Admin
Posts: 4444
Joined: Fri Jan 06, 2006 10:00 pm
Location: Oslo, Norway
Contact:

Re: Image mapping : Peer review needed

Post by Dbug »

Feel free to do it, it would be nice to have a proper Wolfenstein/Midi Maze-like game on the Oric :)

I already have too many started projects that I should really finish first.
User avatar
ibisum
Wing Commander
Posts: 1646
Joined: Fri Apr 03, 2009 8:56 am
Location: Vienna, Austria
Contact:

Re: Image mapping : Peer review needed

Post by ibisum »

This is super exciting! :)
User avatar
kenneth
Squad Leader
Posts: 515
Joined: Fri Nov 26, 2010 9:11 pm
Location: France PdD
Contact:

Re: Image mapping : Peer review needed

Post by kenneth »

It would be interesting to translate it into assembly; without colours it could be runnable on the Oric. 8)
ThomH
Flying Officer
Posts: 238
Joined: Thu Oct 13, 2016 9:55 pm

Re: Image mapping : Peer review needed

Post by ThomH »

I can't speak closely as to the original sources, but the classic serial implementation for affine texturing a polygon is just two adds per pixel. In pseudo-C:

Code:

for (x = start; x <= end; x++) {
    pixel[x][y] = texture[texture_x][texture_y];   /* one texel read, one pixel write */
    texture_x += offset_x;                         /* two adds per pixel              */
    texture_y += offset_y;
}
... with appropriate modifications to fixed point. You can set up offset_x/y at the start of each scan line either by tracking them along each edge and doing a divide, or by plane or barycentric calculations.

If you want a proper perspective fill then in principle you need to interpolate x/z, y/z and 1/z (which are all linear in screen space) and then divide per pixel — e.g. x/z over 1/z — to get regular texture lookup x. But a pretty common optimisation of the time was to do the divide only at certain intervals and to interpolate linearly between those points, see e.g. Descent or software-powered Quake.
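
In rough C the interval version looks something like this (a sketch only; 16.16 fixed point and the span length of 8 are arbitrary choices of mine):

Code:

#include <stdint.h>

/* Perspective span: interpolate u/z, v/z and 1/z linearly, divide only
   every SPAN pixels and step linearly in between.  'plot' is a
   placeholder for the actual texel fetch and pixel write. */
#define SPAN 8

void perspective_span(int count,
                      int32_t u_z, int32_t v_z, int32_t one_z,      /* left-edge values */
                      int32_t du_z, int32_t dv_z, int32_t done_z,   /* per-pixel steps  */
                      void (*plot)(int32_t u, int32_t v))
{
    int32_t u0 = (int32_t)(((int64_t)u_z << 16) / one_z);           /* true u at start  */
    int32_t v0 = (int32_t)(((int64_t)v_z << 16) / one_z);
    while (count > 0) {
        int i, n = (count < SPAN) ? count : SPAN;
        u_z += du_z * n;  v_z += dv_z * n;  one_z += done_z * n;
        {
            int32_t u1 = (int32_t)(((int64_t)u_z << 16) / one_z);   /* true u at span end */
            int32_t v1 = (int32_t)(((int64_t)v_z << 16) / one_z);
            int32_t du = (u1 - u0) / n, dv = (v1 - v0) / n;         /* linear in between  */
            for (i = 0; i < n; i++) { plot(u0, v0); u0 += du; v0 += dv; }
            u0 = u1;  v0 = v1;
        }
        count -= n;
    }
}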

For a ray caster there are a few further differences: (i) you get a fixed texture x via trigonometry; (ii) you always start at y/z = 0 so no divide there; and (iii) you don't need to do a divide to get 1/z because it's just another version of distance-to-wall. Greater distance = bigger per-pixel step.

Corollary: if you're doing a texture fill, you don't actually need to do a divide at all once you've got distance-to-wall. Just start each slice in the middle and work upwards and downwards, incrementing your texture lookup by a suitable multiple of distance, until you go out of range.

So a textured ray caster is usually one add, one shift, one lookup, one store per pixel, and need never do a divide.
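
As a concrete sketch, one textured half-column might look like this (names and the 64-texel wall slice are assumptions, and it writes into a chunky byte-per-pixel buffer rather than real Oric screen bytes):

Code:

#include <stdint.h>

/* Start at the middle of the screen column and walk downwards, stepping
   through a 64-texel wall slice in 8.8 fixed point.  'step' grows with
   distance, so far walls skip texels.  The upper half is the same loop
   run upwards with v stepping down from the middle instead. */
void draw_half_column(uint8_t *dst, int max_pixels,
                      const uint8_t *texture_col, uint16_t step)
{
    uint16_t v = 32u << 8;                  /* start at the middle texel */
    int y;
    for (y = 0; y < max_pixels && (v >> 8) < 64; y++) {
        dst[y] = texture_col[v >> 8];       /* shift, lookup, store      */
        v += step;                          /* one add per pixel         */
    }
}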

That said, ray casting is not especially efficient. For most scenes it's a lot quicker to do forward 2d geometry and keep a 1d depth buffer. Which also neatly resolves map storage memory pressure on 8-bit machines. Each Wolfenstein map in the original occupies 64kb.
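
A sketch of what I mean by the 1D depth buffer (sizes and names are illustrative; distance is treated as constant along each wall here, a real renderer would interpolate it per column):

Code:

#include <stdint.h>

#define COLUMNS 40                 /* e.g. one column per character cell */

static uint16_t depth[COLUMNS];    /* nearest wall distance so far       */
static uint8_t  column_wall[COLUMNS];

void begin_frame(void)
{
    int c;
    for (c = 0; c < COLUMNS; c++) { depth[c] = 0xFFFF; column_wall[c] = 0; }
}

/* Project each wall segment with forward 2D geometry, then submit the
   column range it covers; only the nearest wall per column survives. */
void submit_wall(int col_start, int col_end, uint16_t dist, uint8_t wall_id)
{
    int c;
    for (c = col_start; c <= col_end; c++) {
        if (dist < depth[c]) {
            depth[c] = dist;
            column_wall[c] = wall_id;   /* what to draw in this column */
        }
    }
}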
User avatar
Dbug
Site Admin
Posts: 4444
Joined: Fri Jan 06, 2006 10:00 pm
Location: Oslo, Norway
Contact:

Re: Image mapping : Peer review needed

Post by Dbug »

It's more or less what I'm doing in the rotozoom code in my demos :)
- add offset_x/y to move along one axis
- add offset_y/x to move along the other axis
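
In C it boils down to something like this (a sketch, not my actual routine; 8.8 fixed point, a 256x256 wrapping texture and a chunky destination buffer are assumptions here):

Code:

#include <stdint.h>

/* Two constant texture steps: one applied per pixel along a row, the
   other applied per row.  The 8-bit texture indices wrap for free. */
void rotozoom(uint8_t *dst, int width, int height,
              const uint8_t tex[256][256],
              uint16_t u0, uint16_t v0,        /* start texel (8.8)    */
              int16_t dudx, int16_t dvdx,      /* step along screen X  */
              int16_t dudy, int16_t dvdy)      /* step along screen Y  */
{
    int x, y;
    for (y = 0; y < height; y++) {
        uint16_t u = u0, v = v0;
        for (x = 0; x < width; x++) {
            *dst++ = tex[v >> 8][u >> 8];      /* lookup + store       */
            u += (uint16_t)dudx;               /* move along one axis  */
            v += (uint16_t)dvdx;
        }
        u0 += (uint16_t)dudy;                  /* move along the other */
        v0 += (uint16_t)dvdy;
    }
}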
ThomH
Flying Officer
Posts: 238
Joined: Thu Oct 13, 2016 9:55 pm

Re: Image mapping : Peer review needed

Post by ThomH »

Dbug wrote: Tue Sep 22, 2020 7:23 am It's more or less what I'm doing in the rotozoom code in my demos :)
- add offset_x/y to move along one axis
- add offset_y/x to move along the other axis
Yes! It should be exactly the same thing for a PS1-style linear mapping. And, of course, Wolfenstein, Doom et al are always linear per pixel; the application of perspective is only per column of the wall or a line of the floor.

The hassle on a 6502 is more the texture indexing and pixel plotting, I guess. Which you've clearly already handled. I rotozoomed on an Atari Lynx once, but that has a blitter so I just assembled each scan on the stack and then blitted it into place; since it's a stretching blitter that also handled pixel-to-byte packing, and when applied to a Mode 7-style floor even allowed more distant areas to be calculated at a lower resolution. But the main win was not having to do 16-bit address calculation.