rel3.1


Date: Mon, 24 Oct 88 8:43:42 EDT
From: Paul Tanenbaum <pjt@BRL.MIL>
To: Phil Dykstra <phil@BRL.MIL>
cc: cad@BRL.MIL
Subject: Re: negative angle or positive

This subject often generates considerable confusion when discussed via email... things that could be explained in 5 seconds if people were talking face to face with a toy tank as a conversation piece.

Let me ask this question to try to clarify. Given a model of a tank, where X+ points from back to front, Z+ points up, and Y+ points left (i.e., Z+ is the cross product X+ x Y+), what would RT (or any other software) call this view:

      ^ Z+
      |
      |                 oooooooooooo
                   oooooooooooooooo        0
  X+ <----- 0oooooooooo0             oooooo0
            0oooooooooo                    0
            0                              0
            0oooooooooooooooooooooooooooooo0
             \O____O____O____O____O___O___O/

We in Vuln./Leth. call it 90 (or -270) degrees. MGED does the same thing, at least it always has. Should I interpret your message to say that RT used to be the lone wolf that behaved otherwise, but that it now agrees with everybody else in the Universe in calling this 90 azimuth? Does that also mean that 90 elevation will look down on the top of the tank, and not up at the bottom?

+++paul


Date: Tue, 25 Oct 88 14:44:34 EDT
From: John H Suckling <john@BRL.MIL>
To: phil@BRL.MIL
cc: cad@BRL.MIL
Subject: [Paul Tanenbaum: Re: negative angle or positive]

So what are the answers to Paul's well formulated Questions? Does RT NOW agree with the old time vulnerability analysts? If the answer to the azimuth question is yes, doesn't that disagree with the "compass readers" who would call an angle positive when it goes from North in the direction of East?

                  N(0 or 360)
                       ^
                       |
                       |
                       |
W(270)<-------------------------------->E(90)
                       |
                       |
                       |
                       v
                    S(180)

----- Forwarded message # 1:

Date: Mon, 24 Oct 88 8:43:42 EDT
From: Paul Tanenbaum <pjt@BRL.MIL>
To: Phil Dykstra <phil@BRL.MIL>
cc: cad@BRL.MIL
Subject: Re: negative angle or positive

This subject often generates considerable confusion when discussed via email... things that could be explained in 5 seconds if people were talking face to face with a toy tank as a conversation piece.

Let me ask this question to try to clarify. Given a model of a tank, where X+ points from back to front, Z+ points up, and Y+ points left (i.e., Z+ is the cross product X+ x Y+), what would RT (or any other software) call this view:

      ^ Z+
      |
      |                 oooooooooooo
                   oooooooooooooooo        0
  X+ <----- 0oooooooooo0             oooooo0
            0oooooooooo                    0
            0                              0
            0oooooooooooooooooooooooooooooo0
             \O____O____O____O____O___O___O/

We in Vuln./Leth. call it 90 (or -270) degrees. MGED does the same thing, at least it always has. Should I interpret your message to say that RT used to be the lone wolf that behaved otherwise, but that it now agrees with everybody else in the Universe in calling this 90 azimuth? Does that also mean that 90 elevation will look down on the top of the tank, and not up at the bottom?

+++paul

----- End of forwarded messages


Date: Tue, 25 Oct 88 15:46:35 EDT
From: Phil Dykstra <phil@BRL.MIL>
To: John H Suckling <john@BRL.MIL>
cc: cad@BRL.MIL
Subject: Re: [Paul Tanenbaum: Re: negative angle or positive]

Uh oh...

> So what are the answers to Paul's well formulated Questions?

From Paul's note:

> We in Vuln./Leth. call it 90 (or -270) degrees. MGED does the same thing,
> at least it always has. Should I interpret your message to say that RT used
> to be the lone wolf that behaved otherwise, but that it now agrees with
> everybody else in the Universe in calling this 90 azimuth?

Yes.

> Does that also mean that 90 elevation will look down on the top of the
> tank, and not up at the bottom?

Yes.

And at the risk of continuing this discussion:

> If the answer to the azimuth question is yes, doesn't that disagree with
> the "compass readers" who would call an angle positive when it goes from
> North in the direction of East?

That only depends on how you map compass directions to coordinate axes.  If N is X and E is Y, then toward E would be a negative Z rotation (unless you use a left handed coordinate system or let your Z axis point into the ground).  But I note that astronomers when dealing with azimuth elevation usually use an SEZ (South, East, Zenith) coordinate system (where X is S, Y is E).  This way Z goes "up" and toward E is a positive Z rotation in a right handed system FROM THE SOUTH.  You may call it cheating, but hey, whoever came up with "clockwise" got it backward [probably the combination of reading from left to right and living in the northern hemisphere].

- Phil

ps: In BRL-CAD software we *always* use Right Handed coordinates.
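
To make the convention settled on above concrete, here is a minimal C sketch (an illustration only, not code from RT; the function name is invented), assuming azimuth is measured from +X toward +Y and elevation from the X-Y plane toward +Z, with the angles positioning the *eye* looking back at the model.  With it, az=90 gives Paul's left-side view of the tank and el=90 looks down on its top.

	#include <math.h>

	/* Illustrative only: unit vector from model toward the eye for a
	 * given azimuth and elevation, in the convention discussed above. */
	void
	ae_to_eye_dir(double az_deg, double el_deg, double dir[3])
	{
		double az = az_deg * M_PI / 180.0;
		double el = el_deg * M_PI / 180.0;

		dir[0] = cos(el) * cos(az);
		dir[1] = cos(el) * sin(az);
		dir[2] = sin(el);
	}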


Date: Tue, 25 Oct 88 17:54:57 EDT
From: Doug Gwyn (VLD/VMB) <gwyn@BRL.MIL>
To: Phil Dykstra <phil@BRL.MIL>
cc: John H Suckling <john@BRL.MIL>, cad@BRL.MIL
Subject: Re: [Paul Tanenbaum: Re: negative angle or positive]

Actually, astronomers use "right ascension" and "declination".

Geophysicists often place the X axis eastward, Y northward, and Z down.

Mathematical convention (generally used by physicists) is for angles to be taken as positive in the so-called "counterclockwise" direction (whether it is counterclockwise or not depends on yet another convention; think about it).

The bottom line seems to be that one should choose a consistent nomenclature for the subject area then stick with it.


Date: Fri, 21 Oct 88 17:35:58 EDT
From: Phil Dykstra <phil@BRL.MIL>
To: Carl Moore <cmoore@BRL.MIL>
cc: cad@BRL.MIL
Subject: Re: negative angle or positive

RT has changed in 3.0 to use the more common sign convention for azimuth and elevation. The interface to MGED never did pass azimuth and elevation directly (it passed a viewing transformation matrix) so it did not have to change.

RT differed from the rest-of-the-world not in what constituted a positive azimuth and elevation, but rather in what was being rotated.  RT was rotating the *model* according to the given values, while most people think of positioning the *eye* (viewing or firing direction).  At my urging RT was changed to be the same as everybody else.  So, the sign inversion goes away in 3.0.

- Phil


Date: Wed, 2 Nov 88 3:55:18 EST
From: Phil Dykstra <phil@BRL.MIL>
To: Dan Christensen <att!chinet!mcdchg!clyde!watmath!watcgl!jdchrist@ucbvax.berkeley.edu>
cc: info-iris@BRL.MIL
Subject: Re: videotaping from the iris

What we are presently using here for NTSC from an Iris is a Lyon-Lamb ENC-VI NTSC Encoder (cost ~$4500).  In my opinion, there are higher quality encoders on the market (RGB Technologies, Faroudja CTE-N), but this is one of the cheapest (and includes a Sync generator and Black/ColorBars generator).  In software, you need to set the Iris for 30Hz interlaced (which means that the main monitor becomes useless until you go back to 60Hz).  You get the lower left hand corner of the screen (~640x480 pixels).

If you want the full IRIS screen to come out in NTSC you will need a "frame scan converter". These are more expensive (~25k). See e.g. Photron.

SGI also sells an RGB -> NTSC encoder board for the 4Ds.  We bought one but sadly have not been able to use it.  This is because the SGI board outputs a fully positive video signal (i.e. blanking is around +0.3V) rather than a bipolar signal with blanking at 0VDC.  While I haven't found anything in the RS170A spec that requires an absolute voltage level (it all looks to be AC coupled), much to my surprise our Sony BVU850 seems to REQUIRE blanking to be at 0VDC (or at the very least average picture level to be at 0VDC).  Black levels get all messed up when recorded from the SGI board but come out okay from the ENC-VI.  I don't know who is to "blame", if anyone, but it's something to keep in mind.  [nothing works the way it is supposed to]

- Phil <phil@brl.mil> uunet!brl!phil


Date: Mon, 2 Jan 89 18:50:52 EST
From: Mike Muuss <mike@BRL.MIL>
To: CAD@BRL.MIL
Subject: Rel 3.0/SGI 3D 3.5 bug fix

On the BRL-CAD Release 3.0 tapes, there are three modules that need to be fixed for proper operation on an SGI 3-D series workstation with SGI Release 3.5. To be safe, these modules should also be modified for use on SGI Release 3.6 and beyond.

I regret that these problems were not discovered earlier. My thanks to Lee Butler of NASA Space Telescope and Sue Muuss of BRL/VLD/ASB who encountered these difficulties within hours of each other, and provided detailed bug reports.

The three modules that need to be fixed are:

	mged/dm-ir.c	Add missing curly brackets
	mged/rtif.c	Fix problem with pointer changing
	rt/worker.c	Work-around SGI C Compiler problem


Date: Fri, 6 Jan 89 23:43:53 EST
From: Doug Gwyn (VLD/VMB) <gwyn@BRL.MIL>
To: Mike Muuss <mike@BRL.MIL>
cc: CraySupport@BRL.MIL
Subject: Re: XMP rt died again

I tried a small experiment on Patton. Their asin() function in -lm reports "BAD SCALAR ARGUMENT" only when it really is a domain error (so far as I was able to determine). Note that one of Gwyn's rules of reliable numerical programming is never to use any inverse trig function other than atan2(). There have been times when I've been tempted to remove asin(), acos(), and especially atan() from the standard library to enforce this rule (but of course I haven't done so).

Almost certainly you're feeding 1.000...000x or -1.000...000x to asin() (where "x" stands for some nonzero digits).  The nearly-but-not-quite 1.0 value normally results from floating-point imprecision when angles near some integral multiple of 90 degrees are involved.  Unfortunately, this is more the rule than the exception in computer graphics.  You need to use a more robust algorithm.
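
A minimal sketch of the kind of guard being suggested (my example, not the actual RT fix): clamp the argument so roundoff values just outside [-1, 1] cannot trigger a domain error, and sidestep asin() with atan2(), which has none.

	#include <math.h>

	/* Illustrative only: an asin() that tolerates arguments pushed
	 * slightly outside [-1,1] by floating-point roundoff. */
	double
	safe_asin(double x)
	{
		if (x > 1.0)  x = 1.0;		/* clamp roundoff overshoot */
		if (x < -1.0) x = -1.0;
		return atan2(x, sqrt(1.0 - x*x));	/* never a domain error */
	}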


Date: Fri, 6 Jan 89 23:22:02 EST
From: Mike Muuss <mike@BRL.MIL>
To: CAD@BRL.MIL
Subject: vgr updated

I have updated libfb/if_adage.c for NTSC operation:

	ik2v	NTSC with internal sync
	ik2e	NTSC with external sync

I re-linked rfbd, and all the tools in the UTIL directory (pix*, rle*, fb*, etc) and installed them.
	-Mike


Date: 6 Nov 88 10:11:25 GMT
From: Charles Poynton <vector!poynton@sun.com>
Subject: Luminance from RGB (was "intensity" from RGB)
Sender: xpert-request@athena.mit.edu
To: xpert@athena.mit.edu

In Comp.windows.x article <8811011523.AA02242@LYRE.MIT.EDU>, Ralph R. Swick <swick@ATHENA.MIT.EDU> comments:

> When converting RGB values to monochrome, the sample server(s) compute
> an intensity value as (.39R + .5G + .11B) ...

(.39R + .5G + .11B) is apparently incorrect. This set could be a typographical error (from .29R + .6G + .11B ?), a liberal approximation, or perhaps an unusual phosphor set. Could someone enlighten me on this?

In followup article <8811042303.AA21505@dawn.steinmetz.GE.COM>, Dick St.Peters <stpeters@dawn.UUCP> makes the statement:

> I'd like to suggest that (.39R + .5G + .11B) is not a good choice for
> "intensity" in the realm of computer graphics. ...
>
> A better choice in computer graphics is to equally weight the colors:
> ((R+G+B)/3.0).  Let white be white.

Equal weighting of the primaries is NOT the right thing to do, unless the viewers of your images are members of some species that has uniform response across the visible spectrum, unlike homo sapiens.

Humans see 1 watt of green light energy as being somewhat brighter than 1 watt of red, and very much brighter than 1 watt of blue. The science of colourimetry began to flourish in 1931, when the CIE standardized a statistical entity called the "standard observer". This includes a standard spectral luminance response defined numerically as a function of wavelength. It is from this data that the factors which are used in colour television are derived: .587 for green, .299 for red, and .114 for blue.

The particular factors depend on the wavelengths or chromaticities that you call red, green, and blue: there is wide disparity in these choices.  For computer graphics and television, the luminance factors depend on the chromaticity coordinates of the phosphors of your CRT.  There are compromises in the choice of phosphor primaries, but it turns out that the NTSC did a spectacularly good job of selecting primaries.  The luminance coefficients 0.299 for red, 0.114 for blue, and 0.587 for green are unquestionably the best values to use, unless you know your phosphors intimately.
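
As a minimal illustration (my own helper, not part of any X server or of BRL-CAD), the recommended luminance computation is just:

	/* NTSC luminance from linear R, G, B values in [0,1] */
	double
	luminance(double r, double g, double b)
	{
		return 0.299*r + 0.587*g + 0.114*b;
	}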

The second article continues,

> The formula is from
> the (1954) NTSC standard for compatible color TV, and it has built
> into it a lot of compromises to accommodate old technology and
> problems inherent in the analog transmission of composite color
> television.

Contrary to this assertion, the ONLY compromise in NTSC which impacts the luminance equation is the choice of reference phosphor chromaticities, and a choice of phosphors MUST be made for any system which transmits colour in RGB. Just because it's old (1954) doesn't mean we should throw it away.

Aside from this, the discussion of television coding which follows is substantially correct, except that modulation onto an RF carrier for transmission involves no inherent compromises beyond those already made in formation of baseband NTSC. (Receivers frequently make their own compromises, but these are not inherent.)

For those interested, I attach an alternate description of television coding.

Charles Poynton		"No quote.
poynton@sun.com		 No disclaimer."
(415)336-7846

-----

GAMMA CORRECTION

A picture tube (CRT) produces a light output which is proportional to its input voltage raised to approximately the 2.5-th power.  Rather than requiring circuitry implementing the 2.5-th root function at every receiver to compensate for this, "gamma correction" is performed on the R, G, and B primaries at the camera to form signals denoted R', G', and B'.
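
A one-line sketch of that correction (illustrative only; real cameras do this with analog circuitry, and the exact exponent varies slightly among standards):

	#include <math.h>

	/* gamma-correct a linear R, G, or B value in [0,1] to R', G', or B' */
	double
	gamma_correct(double linear)
	{
		return pow(linear, 1.0/2.5);
	}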

YUV REPRESENTATION (3 wires)

Studio equipment typically processes colour signals in three components YUV, which are easily derived from RGB. The Y channel contains the luminance (black-and-white) content of the image, and is computed as:

Y' = 0.299 R' + 0.587 G' + 0.114 B'

"Colour difference" signals U and V are scaled versions of B'-Y' and R'-Y' respectively; these vanish for monochrome (grey) signals. The human visual system has much less acuity for spatial variation of colour than for luminance, and the advantage of U and V components is that each can be conveyed with substantially less bandwidth than luminance, R or G or B. In analog YUV studio systems, U and V each have a bandwidth of 1.5 MHz, compared to between 4.2 MHz and 5.5 MHz for luminance. In digital systems, U and V are each horizontally subsampled by a factor of two (i.e. conveyed at half the rate of the luminance signal).

Y/C REPRESENTATION (2 wires)

U and V can be combined easily into a "chroma" signal which is conveyed as modulation of a continuous 3.58 MHz sine-wave subcarrier. Subcarrier phase is decoded with reference to a sample or "burst" of the 3.58 MHz continuous-wave subcarrier which is transmitted during the horizontal blanking interval. The phase of the chroma signal conveys a quantity related to hue, and its amplitude conveys a quantity related to colour saturation (purity). The "S" connectors of S-VHS and ED-Beta equipment simply carry Y and C on separate wires. This coding is easily decoded without artifacts. Current S-VHS equipment conveys chroma with severely limited bandwidth, about 300 kHz (which is just 16 cycles of U or V per picture width). Consumer VCR equipment has always recorded the luminance and chroma components separately on tape, but only since the introduction of the S-connector in S-VHS and ED-Beta equipment has the consumer been able to take advantage of this capability.

NTSC REPRESENTATION (1 wire)

The NTSC system mixes Y and C together and conveys the result on one piece of wire. The result of this addition operation is not theoretically reversible: the process of separating luminance and colour often confuses one for the other. Cross-colour artifacts result from luminance patterns which happen to generate signals near the 3.58 MHz colour subcarrier. Such information may be decoded as swirling colour rainbows. Cross-luminance artifacts result if modulated colour information is incorrectly decoded as crawling or hanging luminance dots. It is these artifacts which can be avoided by using the S-connector interface. In general, once the NTSC footprint is impressed on a signal, it persists even if subsequent processing is performed in RGB or YUV components.

Encoded NTSC signals can be sampled into a stream of 8-bit bytes. Such "composite digital" systems have the advantage of using slightly less memory than component systems, at the expense of the dreaded NTSC artifacts. Manipulation of such composite signals to perform operations such as shrinking the picture is difficult or impossible, because if the colour subcarrier frequency is altered the colour information in the signal is destroyed. Therefore, these operations are performed in the component domain.

FREQUENCY INTERLEAVING

The NTSC colour subcarrier frequency is chosen to be exactly 455/2 times the line rate of 15.734 kHz.  The fact that the subcarrier frequency is an odd multiple of half the line rate causes colour information to be interleaved with the luminance spectrum: if a portion of a coloured region has a positive-going modulated chroma component on one scan line, then on the next line chroma will go negative.  This property allows the use of a "comb filter" to separate luminance and chroma.  The signal is delayed by one total line time, in order that two vertically adjacent picture elements be available to the electronics at the same instant in time.  Forming the sum of these two elements produces luminance, and forming their difference produces the modulated chroma.  This feature results in greatly improved luma/chroma separation compared to a 3.58 MHz "trap" filter.  However, a comb filter assumes a fair degree of vertical correlation in the picture, and this assumption does not hold for pictures with great vertical detail.
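
The comb-filter idea can be sketched in a few lines (illustrative only; a real decoder works on sampled composite video and must still demodulate the chroma afterwards):

	/* One-line-delay comb filter: prev[] and cur[] hold samples of two
	 * vertically adjacent scan lines.  Chroma inverts from line to line,
	 * so the sum isolates luminance and the difference isolates the
	 * modulated chroma. */
	void
	comb_filter(const double *prev, const double *cur,
		    double *luma, double *chroma, int n)
	{
		int i;
		for (i = 0; i < n; i++) {
			luma[i]   = (cur[i] + prev[i]) / 2.0;
			chroma[i] = (cur[i] - prev[i]) / 2.0;
		}
	}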


Date: Wed, 11 Jan 89 12:51:04 EST
From: "Gary S. Moss" (VLD/VMB) <moss@BRL.MIL>
To: "Daniel C. Dender" <dender@BRL.MIL>
cc: cad@BRL.MIL, dender@BRL.MIL
Subject: Re: Bug Report on 3.0

< 3) Running LGT with pop-up menus, no keyboard input is accepted unless
< prompted for as the result of a pop-up menu selection. (Running without
< pop-up menus in favor of the curses style menus gives correct behavior.)
<
< Dan Dender

Dan,
	Due to subtle changes in the behavior of the graphics library on the IRIS 4D 2.x, I opted to drop the option of entering single-letter commands via the console keyboard on the IRIS *when* mouse interaction is selected.  It was simply not worth supporting this when keyboard input is available by selecting the curses popup menus instead, and then quitting the menu system to get the keyboard prompt.

On IRIX 4D-3.1, the adoption of the NeWS system for windowing has added more confusion since 'lgt' does not know how to detect the presence of, or start up, the 4Sight graphics system (I only installed the 4Sight software on my 4D last week).  Possibly the next release of 'lgt' will support the PostScript menus, but not unless I get time away from other projects.

I would recommend using the curses-style menus over the SGI ones; they don't have the problem of missing the desired entry with the mouse, they have help built in (by typing 'h'), and they remember what the last selection was for a given menu or sub-menu.

-moss


From: efo <efo@pixar.uucp>
Newsgroups: comp.graphics
Subject: Re: 3D Pixel Transforms
Date: 10 Jan 89 21:42:42 GMT
Sender: news@pixar.uucp
Keywords: Pixel arrays, 3D transformations
To: brl-comp-graphics@smoke.brl.mil

In article <17963@dhw68k.cts.com> stein@dhw68k.cts.com (Rick Stein) writes:
>
>I'm curious to know about how one performs rotations on pixel
>data.
>Any suggestions/references for pixel manipulations (ala Pixar)?

Yes.  The seminal paper on this topic is:

	E. Catmull and A. R. Smith, "3-D Transformations of Images in
	Scanline Order," Computer Graphics 14(3) (SIGGRAPH '80 Proceedings),
	July 1980, pp. 278-285.

And the other answer is, yes, you can decompose a rotation into shears which can be done quite cheaply; see the above.

Eben Ostby Pixar


Date: Thu, 12 Jan 89 13:26:45 PST
From: pom%and.s1.gov@mordor.s1.gov
To: cad@BRL.MIL

SUBJECT: Standards (was: How to model coil spring)
sent TO: cad@brl.mil

There is a group on standards (comp.std.internat) and so I do not want to start an 'std' debate here - but perhaps a short endorsement (support/vote) of Phil's view and a recollection of what IGES started as and why it still is 'initial' is not out of place:

RE:	As for IGES import/export: This would be sort of nice, but I don't
	think anyone here has it on their "to do" list.  When I examined
	interchange formats ... my feeling was to skip IGES and wait for
	ISO STEP. (and thus in the future go the ISO route).]
						Phil Dykstra <phil@brl.mil>

and

Re:	The bottom line is that I predict that ISO STEP (or PDES) models
	useful to us won't be available from U.S. defense contractors until
	AT LEAST five years and we'd best put our money, at this stage, into
	IGES.  And some (if not most) of the tools we'd develop for IGES
	should be usable for developing ISO STEP translators.
						Earl Weaver <earl@brl.mil>

IGES started as a reaction to an overly complicated and too slowly evolving draft of an ANSI standard (which looked rather like a badly written textbook on topology & differential geometry).  IGES was a fast, engineering-common-sense, quick and dirty 'fix, for now', not intended as a base of anything.  Then 'as a compromise' the two drafts were merged and so on ...

Fact remains that IGES has an insufficient theoretical basis, and to build more above its present function would be mostly a waste (I agree with Earl: practical and justifiable - but still a waste) of resources.  What is needed is a formal language (probably BNF-specified) with hierarchical structure (as e.g. PHIGS) which is nD, a super-set of graphics and drafting and of inputs to Finite Element models, with the ability to describe materials, etc. etc.  It is not simple or easy - but it needs to be done.  One reason for the present mess was our inability to set 'reasonable run' goals (IGES was 'short run', the previous draft was 'too long run', and so we are in the same dilemma again!).  It is possible to set a 'two years goal' and evolve consensus on something, a draft, which is workable.  The alternative is to always work with improvisations and fixes.  Since we (US) missed our ANSI opportunity already, we should now contribute our resources to the ISO work (and for the short term, just improvise; that may include a CAD interface - but still quick and dirty).  We'd best put most of 'our money' into developing a draft of a standard which has potential for growth and for meeting long term needs.

						pom (Peter Mikes, LLNL)


Date: Thu, 19 Jan 89 19:53:10 EST
From: Mike Muuss <mike@BRL.MIL>
To: ACST@BRL.MIL

I have gone through all the CAD sources, and taken care of the #endif problems.

For ANSI C, made tokens after #endif into comments. -M


Date: 7 Nov 88 04:40:12 GMT
From: Dave Martindale <clyde!watmath!onfcanim!dave@bellcore.com>
Organization: National Film Board / Office national du film, Montreal
Subject: Re: videotaping from the iris
Sender: info-iris-request@BRL.MIL
To: info-iris@BRL.MIL

In article <8811020355.aa14353@SPARK.BRL.MIL> phil@BRL.MIL (Phil Dykstra) writes:
>
>SGI also sells an RGB -> NTSC encoder board for the 4Ds.  We bought
>one but sadly have not been able to use it.  This is because the SGI
>board outputs a fully positive video signal (i.e. blanking is around
>+0.3V) rather than a bipolar signal with blanking at 0VDC.  While
>I haven't found anything in the RS170A spec that requires an absolute
>voltage level (it all looks to be AC coupled), much to my surprise our
>Sony BVU850 seems to REQUIRE blanking to be at 0VDC (or at the very
>least average picture level to be at 0VDC).  Black levels get all
>messed up when recorded from the SGI board but come out okay from the
>ENC-VI.

RS-170 video can be capacitively coupled (and often is at the input of a piece of equipment). If the equipment depends on blanking being at a specific DC level, it will have a "DC restorer" circuit that clamps a reference part of the signal to ground - usually the blanking level found in the "back porch" after the sync pulse, though the tip of the sync pulse could conceivably be used too.

If your Sony lacks input coupling capacitors, its DC restorer may be fighting the SGI board output - capacitively coupling the signal should fix this. If the Sony simply lacks its own DC restorer and really does depend on the absolute voltage levels in its input (unlikely), then just pass the signal through a video distribution amplifier which does DC restoration (these are readily available).

If the Sony has its own DC restoration and a capacitively coupled input, then there has to be something "out-of-spec" in the video signal that you are feeding it that causes it to be confused - incorrect sync or video amplitude, incorrect sync pulse width, or the like.


From: steinmetz!dawn!stpeters@uunet.uu.net
Date: Fri, 4 Nov 88 18:03:46 EST
To: xpert@athena.mit.edu
Cc: stpeters@dawn.steinmetz
Subject: "intensity" from RGB

> When converting RGB values to monochrome, the sample server(s) compute
> an intensity value as (.39R + .5G + .11B) and assign black if this
> value is less than 50%, white otherwise.  Since green is normally
> defined as (0%, 100%, 0%), it becomes white and (0%, 39%, 39%) becomes
> black.  Other algorithms may be substituted...

I'd like to suggest that (.39R + .5G + .11B) is not a good choice for "intensity" in the realm of computer graphics. The formula is from the (1954) NTSC standard for compatible color TV, and it has built into it a lot of compromises to accommodate old technology and problems inherent in the analog transmission of composite color television.

A better choice in computer graphics is to equally weight the colors: ((R+G+B)/3.0). Let white be white.

For those interested:

Because color space is three dimensional (example coordinates: RGB), composite TV must use three different forms of modulation to transmit three independent information streams on a single carrier. This is not even possible in general.

However, there were clever engineers even back then. Because each TV scanline differs little from its predecessor, the TV signal is roughly periodic, which in turn means its spectrum is as well. A periodic spectrum is a series of spikes, so before color, the spectrum of a TV signal when looked at in detail looked like a picket fence.

It is possible to modulate a signal to carry TWO independent streams of information, using quadrature modulation, so the color guys used the two chrominance signals to modulate a low-frequency signal and then used that entire modulated signal to modulate the basic TV carrier in a way so that the chrominance pickets were in the gaps between the luminance pickets. Voila, 3 signals on one TV carrier.

Well, sort of. The pickets have tails and can get pretty wide if the scene changes rapidly, so there is always some crosstalk and sometimes a lot of it. Things get even hairier when you modulate this onto RF.

Further, not all 3 signals were created equal.  The original signal is what a B&W TV sees, and it still had to look like a B&W TV signal, so this "luminance" signal had to be some combination of RGB that was roughly the "intensity", while what was left over had to be divvied up into two "chrominance" signals.  Various constraints make the bandwidth for the luminance signal substantially more than the *sum* of the bandwidths for the two chrominance signals - and the latter differ by roughly a factor of two.  Just from an information content point of view, it matters what RGB combinations are used.  NTSC took all this into account, as well as human eye characteristics.

But they had more to consider. A B&W TV demodulates the luminance and uses the resulting signal to control a CRT gun. However, the CRT brightness is not linear relative to the luminance signal - in fact, it is roughly proportional to the *square* of the signal. Applying the principle that receivers had to be cheap but transmitters could be expensive, B&W television applied a correction for this at the transmitter. Color TV had to be compatible both ways - not to mention a need to be inexpensive, so color TV's as well responded to the square (more or less) of the luminance, and color transmitters had to apply the same correction ("gamma" correction).

In other words, after the transmitting end divides RGB up into nice linear combinations, it applies a non-linear transformation. The receiver then demodulates the resulting signals, uses linear recombination to reconstruct "RGB", and applies them to a CRT that squares them.

The NTSC had to find RGB combinations for luminance and chrominance that let B&W sets receive color transmissions, color sets receive B&W transmissions, and color sets receive color transmissions, all within the limits of 1954 technology and the constraints of analog RF TV. (We didn't even get to the RF issues.) They did a marvelous job, but their formula for intensity is a severe compromise.

--
Dick St.Peters
GE Corporate R&D, Schenectady, NY
stpeters@ge-crd.arpa
uunet!steinmetz!stpeters


Date: 18 Nov 88 04:48:40 GMT
From: Chandlee Harrell <sgi!chandlee%alpine.SGI.COM@ucbvax.berkeley.edu>
Organization: Silicon Graphics, Inc., Mountain View, CA
Subject: Re: videotaping from the iris
Sender: info-iris-request@BRL.MIL
To: info-iris@BRL.MIL

Iris 3000s are shipped with two video options. The two defaults provide video timings for 1280 by 1024 60hz monitors and 640 by 480 RS170/NTSC monitors. A customer can request that the RS170 option be replaced with either PAL/SECAM timings or for a 1280 by 1024 interlaced 30hz monitor.

All Iris 4D systems ship with all four of the above timing options.

When in RS170 mode, the Iris outputs the three components of RGB in the correct RS170 timings, un-encoded. The bottom leftmost rectangle of 640 by 480 pixels is displayed on the full RS170 monitor screen.

An option board may be purchased from Silicon Graphics for Iris 4D systems which takes the three RGB outputs and color encodes them into a composite video signal. This signal is appropriate for connecting directly to any television, or to a VCR. The composite video output does not match up to broadcast quality; there is some minor difference that I am not familiar with. This should only be a concern for the media organizations, not for those of us creating video presentations. (Otherwise one buys a much more expensive broadcast quality color encoder.)

This option board is called (internally, at least) the CG2/3. It also provides the genlocking capability. This is the capability to sync up and overlay the video from two separate systems (while in either high res 60hz or NTSC modes).

So option one for video taping on an Iris 4D is to buy a CG2/3. The bottom left quarter of the screen may be recorded directly into any recording device that accepts NTSC composite video.

Option two allows you to video tape the full picture on your 1280 by 1024 high resolution display.  Pixel averaging is done to reduce 4 pixels down to one, giving the appropriate number of pixels (640 x 480) for NTSC.  Pixel averaging provides better images than simply drawing the image into the bottom leftmost portion of your screen because a certain amount of anti-aliasing takes place in the pixel averaging.  It also allows full screen video taping.  Option two is available from vendors like RGB Technologies (don't know their pricing).  The output from these systems is, again, standard composite video.  Note: Silicon Graphics has some ongoing development that should help those desiring the pixel averaged approach.
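
For reference, the pixel averaging mentioned above amounts to something like the sketch below (my example with a made-up buffer layout, not the vendors' implementation; averaging 1280x1024 gives 640x512, of which an NTSC frame uses 480 lines):

	/* average each 2x2 block of an RGB image; in holds in_w*in_h*3 bytes */
	void
	average_2x2(const unsigned char *in, unsigned char *out,
		    int in_w, int in_h)
	{
		int x, y, c, out_w = in_w / 2;

		for (y = 0; y < in_h/2; y++)
			for (x = 0; x < out_w; x++)
				for (c = 0; c < 3; c++)
					out[(y*out_w + x)*3 + c] = (
					    in[((2*y  )*in_w + 2*x  )*3 + c] +
					    in[((2*y  )*in_w + 2*x+1)*3 + c] +
					    in[((2*y+1)*in_w + 2*x  )*3 + c] +
					    in[((2*y+1)*in_w + 2*x+1)*3 + c] ) / 4;
	}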


Date: Sun, 27 Nov 88 19:34:03 EST
From: Phil Dykstra <phil@BRL.MIL>
To: butler@stsci.edu
cc: cad@BRL.MIL
Subject: Re: mged bugs on Sun

Lee,

To answer each of your remarks:

> translate an object (like a region) and you always get:
> "Unable to-redraw evaluated things"

This happens when you do an accept edit from an object illuminate while there is an EVALUATED object being displayed.  For reasons not entirely clear to me, MGED is unable to redraw the edited object if it had already been "evaluated", i.e. you need to re-evaluate it if desired after editing.  Ordinarily one illuminates "e'd" objects, not "E'd" ones.  I agree, though, that it should be possible to make MGED re-evaluate for you.

> mged> SunPw_object: nvec 1344 clipped to 1024
> Bus error

Oops. The clipping wasn't quite right. In mged/dm-sun.c the change:

	if( numvec++ > 1024 ) {		OLD
	if( ++numvec > 1024 ) {		NEW

fixes that.

> Can you really subtract one region from another?  (problem creating
> otasma subtracted region items seem to be added on)

I just tried this out. It appears that MGED still has problems with regions within regions. RT however does evaluate such subtractions correctly (I checked). MGED needs to be "modernized" here.

> mged> E fwhrs.r
> proc_reg: Cannot draw solid type 16 (TOR)

I will investigate.  It's some discrepancy between "e" and "E" drawing.  Thanks for the bug report.

- Phil


Date: Thu, 15 Dec 88 4:06:13 EST
From: Phil Dykstra <phil@BRL.MIL>
To: mike@BRL.MIL
Subject: rt

Mike,

FYI.  I hacked in view module specific "set" variables tonight.  We really should generalize this so that parse tables are chainable and can include side-effect functions and descriptive strings, etc.  For now it's a hack (mlib_parse2 that takes exactly two parse tables at once).

- Phil


Date: 11 Nov 88 17:42:10 GMT
From: Malcolm Blanchard <pixar!mab@bloom-beacon.mit.edu>
Organization: Pixar -- Marin County, California
Subject: Re: Luminance from RGB
To: xpert@athena.mit.edu

The discussion of luminance computations and the subsequent discussion of the meaning of white reminds me of an experience I had a few years ago when Pixar was a division of Lucasfilm and we were working on an effect for "Young Sherlock Holmes".  Aesthetic decisions were being made by people sitting in front of color monitors.  The digital images were transferred to film using three color lasers.  The film was printed and then projected in a screening room.  I decided that this was a great place to implement a what-you-see-is-what-you-get color system.

And so I delved into the murky depths of colorimetry in the hope of developing a color-correction program that would produce the same color in the screening room that was measured on the color monitors.  This is a difficult problem (in fact, in its strictest sense, an impossible one, since the color gamuts of the two systems have mutually exclusive regions).  I took into account the CIE coordinates of the monitor's phosphors, its color balance, the sensitivity of the color film to each of the lasers, cross-talk between the film layers, effects of film processing, the spectral characteristics of the print film's dye layers, and the spectral characteristics of a standard projector bulb.  Several steps in this process are extremely non-linear, but I was able to achieve some good results by using some piece-wise linear approximations.  I felt a great sense of success when I used a colorimeter to confirm that the CIE coordinates on the silver screen did, indeed, closely match those on the tiny screen.

We color corrected a few shots and showed them to the effects director.  His response was, "Why does this look so blue?"  It turns out that when we look at a TV we're accustomed to a blue balance and when we're sitting in a theater we expect a yellow balance.  The digital color correction was abandoned and the production relied on the film lab to produce an aesthetic balance.  Thus proving to me that science may work, but computer graphics and film making are still largely a matter of art.


Date: Fri, 16 Dec 88 17:01:31 EST
From: jim frost <adt!madd@bu-it.bu.edu>
Message-Id: <8812162201.AA24738@adt.uucp>
To: info-iris@BRL.MIL
Subject: SGI's interesting idea of a "speedup"

Quoted from "Porting Applications to the IRIS-4D Family":

-- begin quote --

5.3 New Drawing Subroutines

Software release 4D1-3.0 introduced several new Graphics Library subroutines for drawing and pixel access.  Silicon Graphics recommends converting old style routines to the new ones for three reasons:

* Your code will be more portable.

* On the GT and future products, the new subroutines will run up to 10 times faster than their old counterparts.

* The new subroutines simplify the Graphics Library and allow for future expansion.

In most cases, the conversion is simple -- just substitute the new subroutines for the old ones.  Unfortunately, the new subroutines do not work in display lists, so if your code is based primarily on display lists, the solution is not so simple.

This table gives a comparison of old and new subroutines.

----------------------------------------------------------------------
Technique            Old Subroutines        New Subroutines
----------------------------------------------------------------------
draw connected       move,draw,draw         bgnline,v3f,v3f,
line segments                               endline

draw closed          move,draw,draw         bgnclosedline,v3f,v3f,
hollow polygons      or poly                endclosedline

draw filled          pmv,pdr,pdr,pclos      bgnpolygon,v3f,v3f,
polygons             polf or splf           endpolygon

draw points          pnt,pnt                bgnpoint,v3f,v3f,
                                            endpoint

read pixels          readpixels,readRGB     rectread,lrectread

write pixels         writepixels,writeRGB   rectwrite,lrectwrite

draw triangular      new                    bgntmesh,v3f,v3f,
meshes                                      endtmesh

color(vector)        RGBcolor               cpack or c3i

surface normal       normal                 n3f

clear screen,        clear,zclear           czclear
Z-buffer

create RGB           RGBwritemask           wmpack
writemask
----------------------------------------------------------------------

-- end quote --
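
For concreteness, a conversion along the lines of that table might look like the following sketch (my example, not code from the SGI guide; the header name varies by release):

	#include <gl/gl.h>

	static float p0[3] = {0.0, 0.0, 0.0};
	static float p1[3] = {1.0, 0.0, 0.0};
	static float p2[3] = {1.0, 1.0, 0.0};

	void
	draw_old(void)			/* old style connected lines */
	{
		move(0.0, 0.0, 0.0);
		draw(1.0, 0.0, 0.0);
		draw(1.0, 1.0, 0.0);
	}

	void
	draw_new(void)			/* new style: vertices between bgn/end */
	{
		bgnline();
		v3f(p0);
		v3f(p1);
		v3f(p2);
		endline();
	}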

Interestingly, the 10x factor seems to be correct as one of our customers reported that our product "ran ten times slower" on the GT.

We happily followed the SGI guide to speed them up. At one point we changed all our readpixel() calls to rectread() calls, a non-trivial task because they don't have the same arguments at all. To our great surprise, the following was printed when the new call was made:

<rectread> is not implemented.

We were impressed at just how fast their new function didn't work, as I'm sure you can guess.

Curious, we investigated. Making use of "strings", we found that libgl_s.a contained the string "<%s> is not implemented.". Just how many functions might call whatever routine has that string is something that scares me.

Jim Frost
Associative Design Technology
(508) 366-9166
madd@bu-it.bu.edu


Date: Mon, 19 Dec 88 11:38:27 EST
From: "Gary S. Moss" (VLD/VMB) <moss@BRL.MIL>
To: phil@BRL.MIL
cc: acst@BRL.MIL, moss@BRL.MIL, keith@BRL.MIL
Subject: BRLCAD 3.0 MGED: bug report

Phil,
	I figure you probably are more familiar with the Sun driver for mged than most, so here goes; sorry if the details are a little vague, but in retrospect, I didn't ask enough questions:

Randy Smith of MSIC reports that on his Sun 3 (color) running 3.x he gets a segmentation violation when he tries to display something.  He had reported this problem about a month ago, but I told him to wait for BRLCAD 3.0 and try it again, and he says it's still there.  DBX indicates a bad pointer in the PIX_COLOR call (I'm pretty sure it's the one on line 394 of dm-sun.c):

	if( sun_depth < 8 ) {
		sun_color( DM_WHITE );
		color = PIX_COLOR(sun_cmap_color);
	} else {
		color = PIX_COLOR(mp->mt_dm_int);	<----- culprit
	}

The rest of his problems occur only on a Sun 4 with SunOS 4.0, with *no* graphics processor:

When editing a large group of objects, mged seems to hang forever; no display comes up. This is with plenty of memory (32 Megs); the same group works on his Sun 3 with 4 Megs.

When attaching the display, the menu comes up in black and white with no lettering in the boxes. The graphics display has color initially. Then, if he resizes the window, the text gets filled in the menu, but the graphics goes to black and white.

Using the 'rt' command from 'mged' the image is scrambled in some random fashion, though it seems to fill the window; the 'rt' program works fine from the shell.

Randy is very pleased with the improvements to the Sun driver, i.e. window resizing capability, object illuminate, function buttons, etc. He intends to try and track these bugs down with DBX, but he is having trouble with DBX on SunOS 4.0 and will be calling his Sun rep about it. I told him that I don't know if we will have access to 4.0 to reproduce these problems, but agreed to pass it on for the record. He is confident that with the improvements to dm-sun.c in 3.0, our user base will grow considerably on the Suns (not that I gave him any reason to think that we would be pleased by such a development).

-moss


Date: Wed, 28 Dec 88 1:15:16 EST
From: Mike Muuss <mike@BRL.MIL>
To: Stay@BRL.MIL
cc: ACST@BRL.MIL
Subject: pixflip-fb

This evening, I created a new program called "pixflip-fb", which reads in a lot of frames (displaying them along the way), and then uses fb_write() to slap them on the display as fast as possible.

The 3-D (vertex) was able to do it faster than the GTX with current software.  I investigated a bit, and it is not CPU limited (only using about 25% of one CPU).  I made the code run a bit faster by some optimization, and it helped, but not much.  It seems that there may be some swapbuffers or video interlace limitation on the number of operations/second that can be done.

This would be worth looking at, as the maximum speed of pixel display on the 4D is certainly a lot less than it should be.  Something for our lists.
	-M


Date: 3 Jan 89 18:42:11 GMT
From: Jeff Doughty <sgi!jeffd%norge.SGI.COM@ucbvax.berkeley.edu>
Organization: Silicon Graphics, Inc., Mountain View, CA
Subject: Graphics from multiple threads
To: info-iris@BRL.MIL

Mike Muuss reported some problems with the 4D/20 and the 4D GTX. I can address the problem with multiple threads on the MP machine.

There is a limitation (imposed by software) that only the original parent can access the graphics pipe. The SGI demos available ARE multi-threaded, but only a single process performs graphics. Fixing this limitation is one of our highest priorities for the next major release. Currently, this release is tagged as 4.0, and is scheduled for around October 1989.

For a brief description of what is happening: The graphics pipe is mapped into the user program's address space.  When the program fork()s or sproc()s, this pipe is unmapped.  Thus when a child process attempts a graphics call that accesses the pipe, it will dump core with a segmentation fault.  I noticed that this behavior is not documented in either the fork(2) or sproc(2) man pages -- I will remedy this.

The reason that the graphics pipe is unmapped across a fork/sproc is that a great deal of software relies upon the fact that a pipe context corresponds to a single process. We felt that we could not change this in time for the 3.1 release.

As you read this, this limitation is being remedied. Currently, we are planning to introduce a new "share bit" to sproc (PR_SGRAPH) that indicates that the threads would like to share graphics. If this bit is on, the graphics pipe context will be inherited across the sproc(). The user program will be responsible for ensuring mutual exclusion of pipe accesses.
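
If that plan holds, usage would presumably look something like the sketch below (purely speculative on my part; PR_SGRAPH is only a proposal at this point, and its final spelling and semantics could differ):

	#include <sys/types.h>
	#include <sys/prctl.h>

	void
	child(void *arg)
	{
		/* with the proposed share bit, graphics calls would be
		 * permitted here, provided the program does its own
		 * locking around pipe accesses */
	}

	void
	start_graphics_thread(void)
	{
		sproc(child, PR_SALL | PR_SGRAPH, (void *)0);
	}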

Jeff Doughty
UNIX group


Date: Wed, 11 Jan 89 9:43:13 EST
From: "Daniel C. Dender" (SE|andr) <dender@BRL.MIL>
To: cad@BRL.MIL
cc: dender@BRL.MIL
Subject: Bug Report on 3.0

Here are three problems that I have found with Release 3.0 running on an SGI 4D running OS 2.3:

1) In mged, it is possible to object edit an object containing cylinders, push the (rotation) transformations, and end up with cylinders that appear correct. However, when trying to raytrace, the base vectors 'a' and 'b' of the cylinders end up not being perpendicular. Also, radius vectors can be off by a small amount (I think in the area of thousandths of an inch). Raytracing does not work, and the cylinders have to be re-entered so that it will.

2) The ARS does not raytrace correctly (I reported before that it did and just complained a lot, but I was thinking of the last CAD release). The output is black (0,0,0) or sometimes *very* dark. I don't know what surface normals would be output, but at least the thickness is correct when raytraced with rtshot. I need this for a current government contract (within two months?).

3) Running LGT with pop-up menus, no keyboard input is accepted unless prompted for as the result of a pop-up menu selection. (Running without pop-up menus in favor of the curses style menus gives correct behavior.)

Dan Dender


Date: Wed, 11 Jan 89 10:28:28 EST
From: "Gary S. Moss" (VLD/VMB) <moss@BRL.MIL>
To: phil@BRL.MIL
cc: acst@BRL.MIL, moss@BRL.MIL, keith@BRL.MIL
Subject: CAD 3.0: on SunOS 4.0 on 4/280 (good news and bad)

Phil,
	Got a call from Randy L. Smith of MSIC today.  He is very pleased with the fixes for the Sun OS4 that I sent him after your note about the mged/SunOS4.0.diffs file.  He also said that your suggestion for fixing the material properties pointer (mp) problem, by testing it for MATER_NULL first, worked well.  Could you install that fix (I have a copy of it if you need it)?
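
(For the record, that guard amounts to something like the following sketch; this is my reconstruction, not the actual diff, and the fallback path is hypothetical, mirroring the existing low-depth branch:)

	if( mp != MATER_NULL )  {
		color = PIX_COLOR(mp->mt_dm_int);
	}  else  {
		sun_color( DM_WHITE );
		color = PIX_COLOR(sun_cmap_color);
	}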

Now for some more bugs:

1) When attempting to draw dashed lines (subtracted prims) the system hangs (consuming 100% cpu).  Randy has reason to believe that the error lies in the specification of 'texP' in routine dm-sun.c:'sun_get_texP'; 'texP' gets passed to 'pw_polyline'.  Apparently the program is hanging up in 'pr_texvec' which is directly or indirectly called by 'pw_polyline'.  Perhaps the initialization of 'dotdashed' in 'sun_get_texP' is non-portable to Sun OS 4.0.

2) If you resize the graphics window in MGED, the color map gets messed up (all vectors turn white, or green depending on the system). Moving the window works fine.

All of this is on Sun 4/2[68]0 running OS 4.0. Randy understands that this may not receive prompt attention and will notify us if he runs across a fix.

-moss


Date: Wed, 11 Jan 89 11:22:31 EST
From: Earl Weaver (VLD/ASB) <earl@BRL.MIL>
To: Mike Muuss <mike@BRL.MIL>
cc: butler@stsci.edu, cad@BRL.MIL
Subject: Re: How to model coil spring

I whole-heartedly support the proposal of adding the general extruded solid.  For example such a primitive could be defined by translating an area (determined by a closed, non-intersecting, planar curve) along a 3-D curve in either of two modes: 1) the planar area is always normal to the tangent of the 3-D curve; or 2) the planar area always keeps the same orientation as at the starting position, but would not be allowed if the tangent vector of the 3-D curve were ever coplanar with the closed curve within the region of interest the 3-D curve passes through.

As well, I hope you will consider another primitive found in many CAD systems: a solid of revolution defined by revolving the area determined by a non-self-intersecting planar curve about a specified axis (which must be in the same plane) through a given fraction of full rotation (0 < F <= 1) counterclockwise when viewed from the positive direction.  The planar curve may touch the axis, but not cross it.

With the addition of these two primitives (and for the extruded solid only the less general case ["mode 2"] would be needed--but no smooth coil spring capability....), BRLCAD could support the IGES CSG solid modeling specification.
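
To illustrate the solid-of-revolution idea in code (my own sketch, not a proposed BRL-CAD interface), the surface points can be generated by sweeping the planar curve about the axis:

	#include <math.h>

	/* revolve a planar curve, given as points (r[i], z[i]) with r[i] >= 0,
	 * counterclockwise about the Z axis through a fraction frac
	 * (0 < frac <= 1) of a full turn, writing a grid of surface points */
	void
	revolve(const double *r, const double *z, int npts,
		double frac, int nsteps, double out[][3])
	{
		int i, j, k = 0;

		for (j = 0; j <= nsteps; j++) {
			double ang = frac * 2.0 * M_PI * j / nsteps;
			for (i = 0; i < npts; i++, k++) {
				out[k][0] = r[i] * cos(ang);
				out[k][1] = r[i] * sin(ang);
				out[k][2] = z[i];
			}
		}
	}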

A piece of information: The IGES CSG specification is contained in the proposed new ANSI standard for graphics exchange slated for formal acceptance within the next 6 - 9 months.  Some CAD vendors have already implemented the translators for the IGES CSG specification.  Hence if BRLCAD had the two translators (in/out), import/export of CSG models from/to other CAD systems could be made with minimum effort (i.e., no need to write a separate conversion program for all CSG models having different parameter formats than BRLCAD but available in IGES format).  I would think that libwdb would help make the translator development rather straightforward.

-Earl

P.S. An obvious benefit of BRLCAD supporting the soon-to-be ANSI CSG exchange spec would mean that we would be in a position to accept IGES CSG models from US defense contractors... and not have to do most of the major modeling work for some analyses... Under the DoD CALS program, defense contractors will have to start supplying data in ANSI format for new defense systems...


Date: Wed, 11 Jan 89 21:47:20 EST
From: Phil Dykstra <phil@BRL.MIL>
To: Earl Weaver <earl@BRL.MIL>
cc: cad@BRL.MIL
Subject: Re: How to model coil spring

Earl,

Much of the work for adding Prisms (perpendicular extrusion of a curve) to BRL-CAD has already been done. Volumes of revolution is the next most likely primitive to follow. Generalized prisms, where the boundary curve follows the Frenet frame along an arbitrary path (your case 1), will take quite a bit of work, so I wouldn't expect to see them soon.

As for IGES import/export: This would be sort of nice, but I don't think anyone here has it on their "to do" list. When I examined interchange formats a year or so ago, my feeling was to skip IGES and wait for ISO STEP. [You may think that this is a big mistake, with ANSI being behind IGES CSG and all, but I think we can use lessons like GOSIP in the network protocol world to show that the U.S. government isn't likely to stop shooting itself in the foot (and thus in the future go the ISO route).]

- Phil


Date: Thu, 12 Jan 89 11:30:23 EST
From: Earl Weaver (VLD/ASB) <earl@BRL.MIL>
To: Phil Dykstra <phil@BRL.MIL>
cc: Earl Weaver <earl@BRL.MIL>, cad@BRL.MIL
Subject: Re: How to model coil spring

Phil,

Perhaps from a position of principle one can hope that the ISO STEP solution would be the clearer choice (and one that I'm not opposed to), but there are some practicalities that need to be considered.

I don't want to spend a lot of time and LOTS of words here describing the relationships among ANSI, IGES, STEP (and more properly called PDES/STEP in the U.S.), but I think I should share a few short words with those who have an interest in this subject.

ANSI is a national standards group whose work is generally accepted in the U.S. (ANSI is the American National Standards Institute) as "standard" (whether good or bad and if bad, the result will be that users will stop using it). It has no affiliation with the U.S. Govt. that I know of. ISO is the international counterpart.

IGES (the Initial Graphics Exchange Specification--somewhat a misnomer because the current work is no longer "initial") is an industry/govt consortium whose initial participants were primarily GE and Boeing and whose goal was to facilitate exchange of "electronic drawings" produced by dissimilar CAD (here the D means drafting) systems some ten years or so ago (I'll get some flak, here, from IGES old timers who contend that their original intents were not limited to engineering drawings...).  Needless to say, the IGES work has exploded to cover lots of areas (all graphics oriented) and although not perfect (what is?) has been accepted by most of the US industry and govt.  Indeed the Navy requires IGES utilization in some of the BIG Navy contracts (Seawolf, etc.).  Many hundreds of "volunteers" from hundreds of companies and some govt agencies do the IGES work, which is "managed" by NIST (formerly NBS).

Four or five years ago, the French developed their answer (rebuttal?) to IGES called SET and found acceptance at Aerospatiale and maybe a few other european outfits, but the U.S. found drawbacks in it, too. But that work spawned interest in ISO to develop an internationally recognized standard.

Then several years ago, a small group within IGES developed a concept that is based upon a common "database" that could be used throughout the life cycle of a part (or whole system...) from concept to retirement. That concept is PDES (Product Data Exchange Specification) and is "aimed at communicating a complete model with sufficient information content as to be interpretable directly by advanced CAD/CAM applications..."

ISO STEP (Standard for the Exchange of Product Data), "a neutral exchange medium [sic] capable of completely representing product definition data," became an international project and the ISO organization looked primarily to the U.S. for guidance and expertise.

PDES and STEP, in the U.S. are converging to the same thing insomuch as they are commonly called "PDES/STEP." However, the real use of PDES/STEP will not be seen for some time yet. PDES has great interest to the Air Force and the AF would like to see PDES utilized in the ATF (advanced tactical fighter) project, but...

OK, now to the practicalities... Currently the major use of IGES is still in the "electronic drawing" area and has wide acceptance within the U.S. Since the CSG portion of the solid modeling spec has been around for three or four years, some vendors have implemented the translators for that already. And some contractors are now doing solid modeling to support engineering work using their commercial CAD/CAM systems. Thus IGES has a pretty broad base in the U.S. So for the near future (1, 2, 3... yrs?) we can expect that any CAD/CAM data available to us from contractors will be either in IGES format or in the native format of the contractors' CAD systems. Simple arithmetic will show that if available, an IGES file will require much less work to convert than a native format file, especially if there are lots of different CAD systems involved.

The bottom line is that I predict that ISO STEP (or PDES) models useful to us won't be available from U.S. defense contractors until AT LEAST five years and we'd best put our money, at this stage, into IGES. And some (if not most) of the tools we'd develop for IGES should be usable for developing ISO STEP translators

-Earl


Date: Fri, 13 Jan 89 9:05:08 EST
From: Earl Weaver (VLD/ASB) <earl@BRL.MIL>
To: pom%and.s1.gov@mordor.s1.gov
cc: cad@BRL.MIL

Peter,

I applaud your eagerness to do it correctly from the beginning!  However, you'd best pick less than two people to arrive at a consensus on this subject within two years.  [I base this on many years of work involving volunteers of "workers" on many subjects including graphics 'standards'.]

I suggest you contact Kalman Brauner (if you don't know him) at Boeing (206 251-2222 [assuming he still has the same #]) and discuss your ideas with him. Same for Phil Kennicott at General Electric (518 387-6231), Jeff Altemueller at McDonnel Douglas (314 234-5272), Brad Smith at NIST (301 975-3559). These people, if they're on the ball, should welcome your assistance.

Meanwhile, I re-emphasize my statement of our need here at the BRL for the capability to exploit the current base of IGES-available product definition data and that which would be available for the near future until a better mousetrap comes along.  To me the situation is analogous to waiting for the perfect dental solution that prevents all tooth/gum problems rather than submitting to the current picks and drill.  I'm not suggesting that the other outfits on this list have the same need.

John Anderson, here, submits that by the time ISO STEP is developed, validated, and accepted by the masses, a new, better (perhaps your proposal) methodology will come to light, and then should we wait for that, too, rather than implement ISO STEP?

And finally, let me make it absolutely clear that I am NOT opposed to the new approaches in the works (PDES/STEP, or whatever), but I think the benefits, to BRL, in using IGES exceed the cost of developing at least the "import" (to BRLCAD) CSG translator.

-Earl


From: Alan Wm Paeth <awpaeth@watcgl.waterloo.edu>
Subject: 3-Pass Raster Rotation by Shearing
Date: 12 Jan 89 18:01:12 GMT
To: brl-comp-graphics@smoke.brl.mil

In article <2901@pixar.UUCP> efo@pixar.uucp (efo) writes:
>In article <17963@dhw68k.cts.com> stein@dhw68k.cts.com (Rick Stein) writes:
>>I'm curious to know about how one performs rotations on pixel data.

>And the other answer is, yes, you can decompose a rotation into shears
>which can be done quite cheaply; see...[Catmull/Smith, SIGGRAPH '80]

The citation previously posted does *not* treat the shearing approach but instead focusses on 2-pass separable forms (Smith reprises the 2-pass methods in SIGGRAPH '87 pp. 263-272). The work has direct application in image warping.

3-pass rotation using shear matrices appears in:

	``A Fast Algorithm for General Raster Rotation'' Paeth, A. W.,
	Proceedings, Graphics Interface '86, Canadian Information
	Processing Society, Vancouver, pp 77-81.

Shearing (i.e. no image stretching or scaling) yields a very efficient inner loop despite the extra pass, but leaves the technique rotation-specific.  It also allows for 90-degree rotation (which 2-pass techniques do not): here it generalizes the 90deg 3-pass shuffle used by the Blit terminal and elsewhere (eg: Kornfeld, C. ``The Image Prism: A Device for Rotating and Mirroring Bitmap Images'' IEEE Computer Graphics and Applications 7(5) pp 25).
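
The decomposition itself is compact enough to show inline (a sketch of the underlying identity, not the scanline implementation from the paper): a rotation by theta is shear-x by -tan(theta/2), shear-y by sin(theta), then shear-x by -tan(theta/2) again.

	#include <math.h>

	/* rotate the point (*x, *y) about the origin by theta, using three shears */
	void
	rotate_by_shears(double theta, double *x, double *y)
	{
		double a = -tan(theta / 2.0);
		double b =  sin(theta);

		*x += a * (*y);		/* first horizontal shear */
		*y += b * (*x);		/* vertical shear (uses the new x) */
		*x += a * (*y);		/* second horizontal shear (uses the new y) */
	}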

The scale invariance during rotation can be used nicely in making circle or polygon generators that are provably correct. The matrix forms in the paper shed light on how and why the following oft-cited bit of CS lore works:

/* Find next CW point along circle perimeter, start with [X,Y] = [1,0] */

X' = X + eps*Y;

Y' = -eps*X'+ Y; /* carry down __X'__, not X for best results! */

As proposed by Newman and Sproull (vol I only), ascribed to I.E. Sutherland, and mentioned recently in Jim Blinn's Corner (Algorithm 6 in ``How Many Ways Can You Draw A Circle?'', IEEE CG&A, August 1987). Adding a third line:

X'' = X' + eps*Y'; /* carry down some more! */

corresponding to the third shearing pass (plus some one-time-only trig to find eps in statement two, because eps[1]=eps[3]<>eps[2]) now gives rise to a circle drawer in which "eps" can be large enough to draw closed n-gons with impunity.
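A minimal sketch of the resulting n-gon stepper in C (my own filling-in, not code from the paper): it assumes eps1 = eps3 = tan(theta/2) for the two x-shears and eps2 = sin(theta) for the y-shear, with theta = 2*pi/n, so the point steps clockwise around the unit circle.

#include <stdio.h>
#include <math.h>

int main(void)
{
	int	n = 8;				/* sides of the n-gon */
	double	theta = 2.0 * M_PI / n;
	double	e1 = tan(theta / 2.0);		/* x-shear amount, used twice */
	double	e2 = sin(theta);		/* y-shear amount */
	double	x = 1.0, y = 0.0;		/* start at [X,Y] = [1,0] */
	int	i;

	for (i = 0; i <= n; i++) {
		printf("%2d: (%9.6f, %9.6f)\n", i, x, y);
		x += e1 * y;		/* first shear pass */
		y -= e2 * x;		/* second pass: carry down the new x */
		x += e1 * y;		/* third pass: carry down the new y */
	}
	return 0;
}

Each step is an exact rotation by theta (up to roundoff), so the vertices stay on the unit circle and the n-gon closes even for coarse steps, where the two-statement form drifts onto a slightly eccentric ellipse as eps grows.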

Incidentally, the earliest reference I find to three-pass shear matrix techniques goes to C. F. Gauss. His application was, ironically: Ray Tracing! (Three shear passes occur for each refraction-transfer-refraction of a ray through a lens. See Blaker, J. W., __Geometric Optics -- The Matrix Theory__, Marcel Dekker, (New York) 1971).

I have previously posted the functional source appearing in the raster rotation paper; I'm happy to mail interested parties (or post, given sufficient interest) a TeX document which refines the above circle-generator discussion along the lines of Blinn's CG&A article.

/Alan Paeth Computer Graphics Laboratory University of Waterloo


Date: Sat, 14 Jan 89 14:22:50 PST From: pom%and.s1.gov@mordor.s1.gov To: cad@BRL.MIL

I (pom) opined that the time may be right to move beyond IGES; I said:

pom: "It is possible to set a 'two years goal' and evolve consensus on something, a draft, which is workable. The alternative is to always work with improvisations and fixes."

Earl responds: " I re-emphasize my statement of our need here at the BRL for the capability to exploit the current base of IGES-available product.."

pom: No contest. Mine was a technical comment on the state of the art(less standard), not on BRL needs. When IGES was developed, interactive 3D graphics and Finite Elements (FE) were expensive exotic frills. IGES is fine for archiving and transmitting drawings, but today's interactive environment and simulation have many more aspects. E.g., there are many locations developing grid editors for FE models; without some standard 'we' are likely to do that same work 20 times and spend the rest of the time on writing translators. (BRL is just one of the 'we'.)

Earl: And finally, let me make it absolutely clear that I am NOT opposed to the new approaches in the works (PDES/STEP, or whatever),... .......[BUT]... by the time ISO STEP is developed, validated, and accepted by the masses, a new, better ... methodology will come to light, and then should we wait for that, too, rather than implement ISO STEP?

pom: I do not think that you are 'opposed to the new approaches in the works', nobody is. The point is not to have the 'best&latest'. The issue is when and what is the expected return on which investment of resources.

Earl: However, you'd best pick less than two people to arrive at a concensus on this subject within two years.. I suggest you contact ..."

pom: Agreed. I have been on the ANSI committee which adopted IGES and know it takes time. A lot of that consensus-reaching time is consumed in the mail read&comment cycles - particularly in the later phases. I would not want to do THAT again. The agreement on the ISO forum will undoubtedly be even slower and more difficult. BUT !!!! reference to ISO (or PDES/STEP) DOES NOT mean that one has to wait for THAT. IGES /for example/ was developed by a small working group and only later brought to the party. Such a kernel may be in use long before it is formalised. And use of e-mail should make the whole process much faster. To review current ISO proposals would be a good starting point. Are you saying it is impossible to collate a wish-list and have a group ON THE NET which would agree on a specification of a [small but extensible] language kernel in two years? Something not perfect but much more versatile and readable than IGES?

If you are not, then we TWO have already reached a consensus on a rather important point.

Peter (415) 422-7328

pom@under.s1.gov |or| pom@s1-under.UUCP



From: Jim Walsh <jimw@microsoft.uucp> Newsgroups: comp.graphics Subject: Re: Stereoscopic 3D Flight Simulator Date: 15 Jan 89 03:47:46 GMT To: brl-comp-graphics@smoke.brl.mil

In article <45@sdcc10.ucsd.EDU> cs161agc@sdcc10.ucsd.edu.UUCP (John Schultz) writes:
>
> I'm writing a simple flight simulator using a stereoscopic 3D
>display. I'm looking for a text on flight simulation, the only one
>I know of is:
> Garrison, Paul
> Microcomputers and Aviation
> New York: Wiley Press, c1985
>
>UCSD doesn't have the book; is it a good text? Know of any others?
>

You might want to try:

	Applied Concepts in Microcomputer Graphics
	Bruce A. Artwick
	Prentice-Hall, 1984
	ISBN 0-13-039322-3

Bruce Artwick is the president of Sublogic Corporation, and a number of the algorithms, illustrations, etc. in the book refer to simulations (flight simulations in particular). I haven't referred to the book in quite a while, so I can't really remember what I thought about it, but I'm sure that it will help in a number of areas. A quick glance through it shows sections on viewer perspectives and appropriate transformations, various ways to speed image generation, etc.

-- Jim Walsh jimw@microsof.beaver.cs.washington.EDU Microsoft Corporation jimw@microsof@uw-beaver.ARPA jimw@microsof.UUCP The views expressed herein are not necessarily mine, let alone Microsoft's...


From: "Thomas J. Gilg" <tomg@hpcvlx.hp.com> Newsgroups: comp.graphics Subject: Re: Request: Video output circuit design help needed Date: 16 Jan 89 17:27:51 GMT To: brl-comp-graphics@smoke.brl.mil

RJ> I am looking for information and/or help on how to design a video
RJ> display driver. I have a 1 bit deep display memory and I want to end-up
RJ> with a circuit that will shift this out to a monitor. The design will
RJ> generate the pixel stream and the horizontal and vertical sync pulses
RJ> along with any signal level shifting necessary (I believe this to be all
RJ> I will need, please correct me if I am wrong.)

RJ> ........................... Please no "just use a 6845" or "why not a
RJ> 7220?" replies. I have looked at these and other, newer chips and they
RJ> do not interest me.]

You are going to have a challenge here. Usually, a controller chip provides at least two sets of controls:

1. Video Sync Lines
2. Shift register downloading and timing

There are a large number of timing parameters that someone must generate. Read on.

RJ> The design will be implemented using PALs, shift registers and binary
RJ> counters. I already have a design for the RAM dual-port circuit. What
RJ> I need is help designing everything that deals with the raster image
RJ> after the CPU has put it into memory. I am hoping to end up with a
RJ> display on the order of 1024 X 768.

I would suggest going with at least 16-bit wide downloads to the shift registers if you're going 1024x768 or above. That way, your access rate to memory will be 1/16 the pixel rate.

RJ> In addition I need to find out a few facts about monitors. Is a 60 Hz.
RJ> display capable of displaying more lines if you run it at only 30 Hz.?

I think that once a monitor is set at 30 Hz or 60 Hz, you're stuck. What can be changed is the way you feed the monitor. You can opt to feed it ONE image every frame, or two interlaced images every two frames.

RJ> Can you up the pixel clock and feed more pixels per scan line? Does all
RJ> this require surgery on the display's sweep circuitry or is there such a
RJ> thing as a magic, self compensating circuit?

If you have a fancy controller, I don't see why not, but the pain may not be worth it. What you do while the scan line is ACTIVE is up to you (i.e., feed it all the pixels you want (see beam size comment)). The critical thing is that you observe the fact that the scan line will only be active for a certain fixed amount of time, and that you have certain sync parameters to consider. I don't know how multi-sync monitors play here ?? Any multi-sync experts ??

RJ> It seems to me that a
RJ> 25Mhz display is capable of displaying more than 720 X 3XX(?) pixels,
RJ> especially if you go from 60 (or 50) to 30 Hz refresh. I suppose that
RJ> there are limits based upon the phosphor's particle size and/or the
RJ> electron beam size.

Here are some specs for a 1280x1024 monitor I know of. Remember, someone must keep track of all these milliseconds (i.e., your pixel stream IS NOT a continuous flow of un-interrupted bits):

Timing ( kinda rounded off in some cases - roughly RS-343A specs ):

Horiz. sweep freq:  63.3 KHz           Vert. frame rate:    60.0 Hz (non-int)
       period:      15.8 us                 active period:  16.17 ms
       front porch: .41 us                  blanking:       .505 ms
       sync width:  1.7 us                  period:         16.7 ms
       back porch:  1.85 us                 front porch:    47.4 us
       active scan: 18.8 us                 sync width:     47.4 us
       blanking:    4.5 us                  back porch:     410.5 us

Usually, the whole timing mess grinds down to finding your base clock rate so that all the above timing figures work out as integer multiples of it (e.g., vert. front porch is equal to 3 scan lines). Then whatever resolution works out is what you get. Usually, I find that you get some specs such as the above, work forward to find the resulting resolution, gripe that it's too weird (1258 X 1013), work backwards and change some of your initial parameters, making sure to stay within the specs, etc.....
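A small sketch of that bookkeeping in C. The blanking and period figures are the sample monitor's numbers from the table above; the pixel clock is my own assumed figure, not part of the spec.

#include <stdio.h>

int main(void)
{
	double	pixel_clock = 110.0e6;	/* Hz -- assumed, not from the spec */
	double	h_period    = 15.8e-6;	/* full scan-line period (s) */
	double	h_blanking  = 4.5e-6;	/* horizontal blanking per line (s) */
	double	v_period    = 16.7e-3;	/* full frame period (s) */
	double	v_blanking  = .505e-3;	/* vertical blanking per frame (s) */

	/* pixels fit only in the active part of the line; lines fit only
	 * in the active part of the frame */
	double	pixels_per_line = (h_period - h_blanking) * pixel_clock;
	double	lines_per_frame = (v_period - v_blanking) / h_period;

	printf("active pixels per line: %.0f\n", pixels_per_line);
	printf("active lines per frame: %.0f\n", lines_per_frame);
	return 0;
}

Work forward like this, gripe about the odd numbers that fall out, then adjust the clock or the porches and try again.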

Good luck !!

Thomas Gilg tomg%hp-pcd@hplabs


Date: Tue, 17 Jan 89 10:51:58 EST From: Earl Weaver (VLD/ASB) <earl@BRL.MIL> To: pom%and.s1.gov@mordor.s1.gov cc: cad@BRL.MIL

Peter,

It looks like we're in agreement on philosophical principles. But on the practical side of the issue, we may still have differences of opinion.

Remember that my original msg to Mike stated the benefit to BRL of using IGES. When Phil responded [and essentially compared IGES to STEP] I offered a very short discussion on the history and relationships among IGES, STEP, ANSI, etc., and emphasized the usefulness of IGES to BRL here and now.

You responded by agreeing with Phil that waiting for ISO STEP was the better choice ("I agree with Earl: [BRL using IGES is] practical and justifiable - but still a waste of resources"), but then proposed the development and implementation of a better mousetrap (by reaching a consensus among knowledgeable participants within a two-year period).

I answered by clarifying my position and again restating the reason why BRL would benefit here and now with IGES [V 4.0].

All along my point has been that IGES has not only been accepted by many US vendors, but that it is being used. I don't care if IGES is a poor substitute for a better methodology; until US industry actually accepts and implements a better mousetrap, we [BRL] would gain nothing by either waiting for ISO STEP or developing a better near-term mousetrap UNLESS we could gain immediate participation by Sikorsky, McDonnell Douglas, Bell, Boeing, Northrop, General Dynamics, Lockheed, Grumman, Martin Marietta, etc., etc., which, in my opinion, is unlikely to happen in spite of the nobleness of the cause.

I'm glad you are aware of the trials and tribulations of the ANSI process. (For the record, after the ANSI exercise of adopting IGES V 2.0 in which you participated [I have most of the original paperwork of THAT exercise and am really glad you folks were the ones that broke the ice, because that experience made the follow-up task(s) much easier], that ANSI subcommittee went into hibernation. When it reactivated, I was asked to help out and subsequently got stuck with the job of resolving the ballot comments when IGES V 3.0 was balloted. We're currently in the process of bringing V 4.0 up to ballot [and will most likely be involved in PDES/STEP when it matures].) I hope you'll agree that V 4.0 bears little resemblance to V 2.0, although it is still deficient by PDES/STEP "standards."

Your proposal for a dialogue of ideas and a wish list is a good one. And so is the "two-year" project. If successful, there is a very good possibility it would influence and make an impact on the PDES/STEP (and ultimately ISO STEP) work.

Since you agree that your concerns are not in the BRL/IGES mutual benefits area, and I agree that the industry needs something more complete than the current IGES, let's drop this discussion from this cad list and continue privately and with other folks who have an interest. That poor soul who asked how to model a coil spring will be afraid to ask another question!


Date: Tue, 24 Jan 89 6:14:26 EST From: Mike Muuss <mike@BRL.MIL> To: ACST@BRL.MIL Subject: if_gt.c

I have improved if_gt.c so that it can now handle /dev/sgi0f (full screen). Problem was that lrectwrite & rectcopy use window-oriented coordinates, not viewport-oriented coordinates.

I also tagged some sections where performance improvements might be possible. Mostly as a reminder to me, when I try to add the multi-tasking support.

FYI, I have temporarily connected the Dunn's RS-232 in place of the VAS4, so that I could shoot some slides for the National Geographic. -M


Date: Tue, 24 Jan 89 11:41:35 EST From: "Gary S. Moss" (VLD/VMB) <moss@BRL.MIL> To: "Daniel C. Dender" <dender@BRL.MIL> cc: cad@BRL.MIL Subject: Re: Rel 3.0 Manuals

< I don't know if this is just in my copy of the BRLCAD 3.0 manual or not,
< but I didn't find a manual page on 'lgt'.
Heh, you're right, something got screwed up somewhere, but you should have an on-line copy. If you want to make a hardcopy, 'tbl lgt/lgt.1 | troff -man' should work. Make sure you read the report on 'lgt' toward the back of the 3.0 manual; it is much more helpful than the manual pages. -moss


From: Jerry Kallaus <kallaus@leadsv.uucp> Newsgroups: comp.graphics Subject: Re: 3D Rotations/Instancing Date: 26 Jan 89 02:43:51 GMT To: brl-comp-graphics@smoke.brl.mil

In article <65@sdcc10.ucsd.EDU>, cs161agc@sdcc10.ucsd.EDU (John Schultz) writes:
>
>
> Problem: Need to rotate an instance of an object from a database,
> then rotate the rotated instance by the previous rotated instance:
>
> [currentinstance] = [currentinstance][newinstance]
>
> Where the above are each matrices. One of the problems is decay,
> as rotations iterate, decay increases and the object falls apart,

I believe there is a procedure called "Gram-Schmidt orthonormalization" that functionally does what you want, but it may be computationally expensive. I can't find it right now. You may want to look it up.

Rotation matrices are orthonormal matrices, which have the property that their transpose is their inverse. You might consider the following. Let A be the estimate of the current orthonormal matrix, and E the matrix of errors in the elements of A. Then, writing A^t for the transpose of A:

	(A-E)^t (A-E) = I
	A^t A - 2 A^t E + E^t E = I
	(if E << I and E << A, then E^t E ~= 0)
	E ~= 1/2 ( A^t A - I )

So compute the improved estimate as

	A' = A - 1/2 ( A^t A - I )

If this works at all, it will only work when the errors are relatively small. Basic cost is one matrix multiply. Note that no magnitudes involving square roots are involved.
--
Jerry Kallaus   {pyramid.arpa,ucbvax!sun!suncal}leadsv!kallaus   (408)742-4569
"Funny, how just when you think life can't possibly get any worse,
it suddenly does." - Douglas Adams
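A minimal sketch of this sort of correction step in C, for a 3x3 row-major matrix. This is my own illustration, not Kallaus's code; it uses the common first-order form A' = A (3I - A^t A)/2, which keeps an extra leading factor of A compared with the line above.

/*
 * One re-orthonormalization step:  A' = A * (3I - At*A) / 2
 * (At denotes the transpose of A.)  Only worthwhile when A is already
 * nearly orthonormal; the cost here is two 3x3 matrix multiplies.
 */
static void renormalize(double a[3][3])
{
	double	ata[3][3], out[3][3];
	int	i, j, k;

	/* ata = At * A */
	for (i = 0; i < 3; i++)
		for (j = 0; j < 3; j++) {
			ata[i][j] = 0.0;
			for (k = 0; k < 3; k++)
				ata[i][j] += a[k][i] * a[k][j];
		}

	/* out = A * (3I - ata) / 2 */
	for (i = 0; i < 3; i++)
		for (j = 0; j < 3; j++) {
			double s = 0.0;
			for (k = 0; k < 3; k++)
				s += a[i][k] * ((k == j ? 3.0 : 0.0) - ata[k][j]);
			out[i][j] = 0.5 * s;
		}

	for (i = 0; i < 3; i++)
		for (j = 0; j < 3; j++)
			a[i][j] = out[i][j];
}

Calling something like this every few hundred concatenations keeps the accumulated instance matrix from decaying.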


From: Thant Tessman <thant@horus.sgi.com> Newsgroups: comp.graphics Subject: Re: Ray tracing refraction Date: 30 Jan 89 18:31:56 GMT Sender: daemon@sgi.com To: brl-comp-graphics@smoke.brl.mil

In article <0XryqWy00Uo1875Ud-@andrew.cmu.edu>, po0o+@andrew.cmu.edu (Paul Andrew Olbrich) writes:
> Hi-ho,
>
> I'm trying to add refraction to a ray tracing program I'm writing in C. Could
> someone help me out a bit?

Warning: I'm doing this from memory, and I've never written a ray-tracer.

                   \<-theta1->|
                    \         |
                     \        |
incoming ray->        \       |     <- surface normal
                       \      |
                        \     |
                         \    |
                          \   |      <material with index of refraction n1>
                           \  |
                            \ |
                             \|
------------------------------------------------------------  <- surface
                              |
                              | \
                              |  \
                              |   \      <material with index of refraction n2>
                              |    \
                              |     \
                              |      \   <- outgoing ray
                              |       \
                              |        \
                              |         \
                              |<-theta2->\

sin(theta1)     sin(theta2)
-----------  =  -----------
     n1              n2

(theta1 and theta2 are angles)

If n1 is 1 (like for a vacuum) and n2 is bigger than 1 (like glass) then theta2 is smaller than theta1. Note that if it is the other way around, that is, the ray goes from an optically dense material to an optically less dense material, (like a ray coming out of glass), then theta2 is bigger than theta1. (Maybe this is where you are screwing up?)

Also, if you are solving for sin(theta2) and you get a number bigger than 1, it means that the ray should be totally internally reflected. This is why when you look up from the bottom of a pool, everything above the water looks like it is in a round window above your head.

There are other considerations. The ratio of light reflected to transmitted is dependent on the angle but I can't remember that stuff.

Also, reflected and transmitted light is polarized (also dependent on the angle). Pelicans have polarized filters in their eyes to see fish in the water better. Bees have polarized eyes to navigate with sunlight on cloudy days.

Some crystals have different indices of refraction for light polarized in different directions. When you look through them you see two images.

And on and on...

Thant Tessman (thant@sgi.com) "make money, not war"


From: "John B. Nagle" <jbn@glacier.stanford.edu> Newsgroups: comp.graphics Subject: Re: 3-D perceptual abilities Date: 31 Jan 89 18:03:20 GMT Keywords: TV 3-D graphic To: brl-comp-graphics@smoke.brl.mil

In article <1104@nic.MR.NET> jjc@sun1.UUCP (Jon Camp) writes:
>
>1) As Benie Cosell posted, 3-D perception is much more than stereopsis. It
>involves parallax, focus, accomodation, obscuration, perspective, memory, a
>great many other functions which I am not aware of and most likely some
>that no one has ever measured. In our everyday lives, stereopsis is not
>even the primary means of depth perception. Stereopsis IS, however,
>relatively inexpensive to simulate, and is therefore the only contact
>most people have with "3-D display".

This subject has been studied in some detail by developers of flight simulators. See "Flight Simulation", by J.M. Rolfe and K.J. Staples, ISBN 0-521-35751-9, section 7.2, "The Psychophysics of Visual Perception". They identify eight main non-binocular cues of distance, which I will not give here. I do recommend this book to anyone involved in the generation of realistic imagery.

>3) The common wisdom is that stereopsis is most effective within the reach
>of our hands.

"Flight Simulation" references T. Gold, 1972, "The Limits of Stereopsis For Depth Perception in Dynamic Visual Situation", Society for Information Display, International Symposium, Digest of Technical Papers, who reports that stereopsis dominates differential size and motion parallax out to about 17m (64m if the observer fixates his eyes on the moving object.) This is with the observer moving at about 0.5m/sec. Faster movement brings the limit closer. This is somewhat beyond the reach of the hands, and in fact stereo vision systems have been built for in-flight refueling simulators.

>4) As one who has experience viewing stereo and other 3-D representations,
>I wish to report that stereopsis alone gives me a sensation of "viewing
>fatigue", possibly because stereo so vividly presents SOME depth cues
>while perversely witholding others. This is a personal experience, NOT a
>rigorous criticism of stereoptic display.

Viewing fatigue for 3D imagery is a serious problem. The phenomenon is moderately well understood, and has been written up in technical papers of the SMPTE, from the point of view of understanding how to make 3D movies. When viewing images that are not in the same scale as real life, some rather strict rules must be followed to avoid visual fatigue. Unfortunately, I don't have the paper around, but it was by someone in Hollywood who provides 3D gear to filmmakers. One useful gadget they offer is a pocket calculator preprogrammed with the calculations needed to set up a shot for 3D. For close-ups, this is non-trivial. They also offer a special leader for 3D films that allows the projectionist to align the system properly. Failure to do this correctly will induce headaches in some of the audience.

John Nagle


From: Thilaka Sumanaweera <sumane@anaconda.stanford.edu> Newsgroups: comp.graphics Subject: Re: ray tracing refraction Date: 31 Jan 89 22:19:40 GMT To: brl-comp-graphics@smoke.brl.mil

> Is index of refraction linear with wavelength?

Maxwell's equations yield the following. The complex index of refraction of a given material is:

	M = n - jK

where

	n = m_r e_r { 1 + sqrt[ 1 + ( L s / (2 pi c e) )^2 ] } / 2

	K = m_r e_r { -1 + sqrt[ 1 + ( L s / (2 pi c e) )^2 ] } / 2

	m_r = relative permeability of the material
	e_r = relative permittivity of the material
	L   = wavelength of light
	s   = conductivity of the material
	pi  = 3.14...
	c   = speed of light in the material
	e   = permittivity of the material

Source: Principles of Optics - Born and Wolf, Pergamon Press 1959

For dielectric materials like glass, s = 0. Therefore K = 0 leaving only the real part for a good approximation.

Hope this is helpful.

Thilaka


Date: Wed, 1 Feb 89 20:40:56 EST From: Phil Dykstra <phil@BRL.MIL> To: Paul Tanenbaum <pjt@BRL.MIL> cc: cad@BRL.MIL Subject: Re: Cursor in libfb

Unfortunately cursors with color weren't part of the original "frame buffer model" that we adopted in libfb. Often the color of a hardware cursor can't be set, or it comes from its own color map, or it is defined by a boolean operation, etc. The fb_setcursor() routine only defines the size, shape, and active point, for hardware with changeable cursors - no color.

We could try to add a command for this but it would be hard to make it portable. You might want to consider doing something like the following for hardware specific graphics commands:

#ifdef sgi
	/* See if we are using a local SGI display */
	/* (compare the name prefix only, hence the -1 to drop the trailing NUL) */
	if( strncmp(fbp->if_name, "/dev/sgi", sizeof("/dev/sgi")-1 ) == 0 )  {
		/* SGI one color cursor */
		mapcolor( 1, red, green, blue );
	}
#endif

[In reality the SGI is more difficult because there are three different kinds of cursors available.]

Games like this of course won't work for remote frame buffers.

- Phil



From: Paul Heckbert <ph@miro.berkeley.edu> Subject: Re: Ray tracing refraction Date: 2 Feb 89 04:54:49 GMT Sender: news@pasteur.berkeley.edu To: brl-comp-graphics@smoke.brl.mil

Here's some C code to compute the refracted ray direction.

Aside to Thant: Actually, Snell's law is n1*sin(theta1)=n2*sin(theta2); you were using the reciprocals of the indices of refraction.

Below is an excerpt of some notes I wrote for the Intro to Ray Tracing SIGGRAPH tutorial. (These notes are coming out soon as a book from Academic Press, by Glassner, Arvo, Cook, Haines, Hanrahan, and Heckbert. Included in the book is a derivation of the following formulas from Snell's Law, which I would have included here except it's written in eqn and troff and uses paste-up figures).

---------------------------------

Below is C code for SpecularDirection and TransmissionDirection, routines which compute secondary ray directions. The following formulas generate unit output vectors if given unit input vectors.

/*
 * SpecularDirection: compute specular direction R from incident direction
 * I and normal N.
 * All vectors unit.
 */
SpecularDirection(I, N, R)
Point I, N, R;
{
    VecAddS(-2.*VecDot(I, N), N, I, R);
}

/*
 * TransmissionDirection: compute transmission direction T from incident
 * direction I, normal N, going from medium with refractive index n1 to
 * medium with refractive index n2, with refraction governed
 * by Snell's law: n1*sin(theta1) = n2*sin(theta2).
 * If there is total internal reflection, return 0, else set T and return 1.
 * All vectors unit.
 */
TransmissionDirection(n1, n2, I, N, T)
double n1, n2;
Point I, N, T;
{
    double eta, c1, cs2;

    eta = n1/n2;                    /* relative index of refraction */
    c1 = -VecDot(I, N);             /* cos(theta1) */
    cs2 = 1.-eta*eta*(1.-c1*c1);    /* cos^2(theta2) */
    if (cs2<0.) return 0;           /* total internal reflection */
    VecComb(eta, I, eta*c1-sqrt(cs2), N, T);
    return 1;
}

where
	double VecDot(A, B)	dot product: returns A.B
	VecComb(a, A, b, B, C)	linear combination: C = aA+bB
	VecAddS(a, A, B, C)	add scalar multiple: C = aA+B

typedef double Point[3];	/* xyz point data type */
Point A, B, C;
double a, b;
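The vector helpers are only described above; one possible filling-in (mine, not from the notes), matching the stated semantics and the K&R style of the routines:

double
VecDot(A, B)			/* returns A.B */
Point A, B;
{
	return A[0]*B[0] + A[1]*B[1] + A[2]*B[2];
}

VecComb(a, A, b, B, C)		/* C = aA + bB */
double a, b;
Point A, B, C;
{
	C[0] = a*A[0] + b*B[0];
	C[1] = a*A[1] + b*B[1];
	C[2] = a*A[2] + b*B[2];
}

VecAddS(a, A, B, C)		/* C = aA + B */
double a;
Point A, B, C;
{
	C[0] = a*A[0] + B[0];
	C[1] = a*A[1] + B[1];
	C[2] = a*A[2] + B[2];
}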

-------------

Whitted gives formulas for the refracted ray direction in his classic paper on ray tracing: "An Improved Illumination Model for Shaded Display", CACM, June 1980, but the formulas above compute faster. It's a fun exercise in trig and vector algebra to prove that Whitted's formulas are equivalent.

Paul Heckbert, CS grad student 508-7 Evans Hall, UC Berkeley UUCP: ucbvax!miro.berkeley.edu!ph Berkeley, CA 94720 ARPA: ph@miro.berkeley.edu


From: Dave Martindale <dave@onfcanim.uucp> Newsgroups: comp.graphics Subject: ShowScan vs. IMAX Date: 1 Feb 89 18:11:29 GMT To: brl-comp-graphics@smoke.brl.mil

In article <18070@glacier.STANFORD.EDU> jbn@glacier.UUCP (John B. Nagle) writes:
>
> Showscan is a straightforward movie system using 70mm film at about 60
>frames per second. At this speed, the illusion of motion achieved at
>24 frames per second is much improved. The screen is also made sufficiently
>large to cover the entire human field of view. The overall effect is said
>to approximate reality. It can be considered a benchmark as to how good
>a display system has to be before it disappears and becomes a virtual reality.

Time to mention the *other* high-definition film system - IMAX/OMNIMAX:

IMAX is a system that uses 15-perforation frames running horizontally - each frame uses 3 times the film area of Showscan. IMAX runs at a conventional 24 frames per second. OMNIMAX uses the same film, camera, and projectors as IMAX, except that the camera uses a sort of fisheye lens to obtain a 180 degree field of view, and the image is projected onto the inner surface of a dome.

Showscan and IMAX consume just about the same amount of film per second - Showscan runs 2.5 times as fast as normal 70mm but the frame is the same size, while IMAX uses 3 times the film area at the normal frame rate. So the "data rate" is the same, but the two systems use it differently.

The Showscan screen may be quite *wide*, but it's not very high. The frame size is the same as normal theatrical 70mm, 2.072 x 0.906 inches, 5 perforations per frame. IMAX frames are 2.032 x 2.772 inches. (Both figures are camera apertures; I'll ignore projector apertures for the sake of simplicity.)

If we pick an arbitrary screen width of 60 feet, a Showscan screen will be only 26 feet high while an IMAX screen is 44 feet high. Thus, IMAX does a much better job of "covering the human field of view" vertically. Showscan does have the advantage of a brighter image on screen, mostly because it is covering less screen.
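The arithmetic behind those figures, for anyone who wants to try other screen widths (the frame dimensions are the ones quoted above; the 60-foot width is just the example chosen):

#include <stdio.h>

int main(void)
{
	double	width = 60.0;				/* chosen screen width, feet */
	double	showscan_w = 2.072, showscan_h = 0.906;	/* frame size, inches */
	double	imax_w = 2.772, imax_h = 2.032;		/* frame size, inches */

	/* screen height scales with the frame's height/width ratio */
	printf("Showscan screen: %.0f x %.0f ft\n",
		width, width * showscan_h / showscan_w);
	printf("IMAX screen:     %.0f x %.0f ft\n",
		width, width * imax_h / imax_w);
	return 0;
}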

When covering the same screen width, a Showscan frame is being magnified 34% more than an IMAX one, so each IMAX image is sharper. On the other hand, projecting more images per second increases apparent sharpness, so Showscan probably doesn't lose anything here.

Showscan uses standard 70mm equipment, modified a bit. IMAX uses custom-built cameras with 4 registration pins and a vacuum pressure plate to hold the film flat. The projectors use fixed registration pins and hold the film flat against a glass block using compressed air. As a result, IMAX has very little image jitter on screen compared with any other film format. Note that this is not a fundamental difference between the two formats, just a practical one.

So what's the net result? For rapid motion, either of the camera or the subject, Showscan looks great while IMAX images flicker and strobe. But when the camera and scene are stationary or moving slowly, IMAX gives a "window into the world" quality that Showscan can't, because of the narrower field and more jitter. (If you get a chance, see the footage of the earth shot from space in "Hail Columbia" - you'll see what I mean.) Which system is better depends on the subject matter.

By the way, to compare this to computer displays, a good IMAX projection print will have 40 lp/mm resolution. That's 80 pixels/mm, 2032 pixels/inch, for an equivalent resolution of 5600 x 4130 pixels. What sort of computer display hardware would be needed to generate that at 24 fps? As someone else pointed out, flight simulator people are working on generating high resolution only where the pilot is currently looking, with quite low resolution elsewhere, to minimize computing. However, you can't do that with an audience of more than one.

About 3D: IMAX and Omnimax are particularly good systems for 3-D. Vertical misalignment between the two images of a stereo pair gives people headaches, and vertical jitter produces this sort of misalignment. IMAX has very low jitter, and any jitter that remains is mostly horizontal. Omnimax has the potential of really surrounding you with a 3-D environment.

3D films have been made in both IMAX and Omnimax. If there is interest, I can re-post an article on IMAX 3D that I originally posted about 3 years ago.

Dave Martindale


Date: Wed, 8 Feb 89 11:20:43 EST From: Paul Stay <stay@BRL.MIL> To: cad@BRL.MIL, cmoore@BRL.MIL Subject: [Paul Stay: [Paul R Stay: rle files in BRLCAD 3.0]]

Carl, here is a note I posted which explains the new format. I guess you missed it the first time around.

----- Forwarded message # 1:

Date: Wed, 8 Feb 89 11:17:08 EST From: Paul Stay <stay@brl.mil> To: stay@BRL.MIL Subject: [Paul R Stay: rle files in BRLCAD 3.0]

----- Forwarded message # 1:

Date: Sat, 24 Sep 88 2:12:51 EDT From: Paul R Stay <stay@BRL.MIL> To: vmb@BRL.MIL Subject: rle files in BRLCAD 3.0

FYI

With the new release of BRLCAD 3.0 the following has changed with reference to rle images.

There are three different types of rle images: versions 1, 2, and 3. The latest rle software from Utah uses version 3, and the various programs which use rle in BRL-CAD have been changed to use the new library, which is much more portable; many new tools for changing and combining rle images are now included from Utah. Version 2 rle files can be read by version 3 programs, but version 1 files cannot be read with the new (version 3) software. Since Nov 1987 we have been using the version 2 format, and previous to that we used version 1. I have gone through the /demo directory and converted all version 1 files to version 3.

I was unable to go through each individual's directories and change the rle files which were located outside the demo directory. To test your rle files and convert them you can do the following...

rlehdr file.rle

This will tell you if it's not an RLE file, which means it's probably a version 1 file.

If rlehdr does not recognize the rle format, then you can convert it with the following sequence of commands.

1> fbcmap 0
2> orle-fb file.rle
3> fb-rle file.rle.new
# verify that it worked:
4> rle-fb file.rle.new
5> mv file.rle.new file.rle

Since disk space is usually scarce, it's best to use a real frame buffer for the conversion. Version 2 and version 1 files can still be read by the orle-fb program...

If you have questions regarding this or need help, please send me mail and I will get back to you on Monday.

-Paul

----- End of forwarded messages

----- End of forwarded messages


From: David Jones <djones@polya.stanford.edu> Newsgroups: comp.graphics Subject: details of Pulfrich Effect (SuperBowl 3D effect) Date: 7 Feb 89 00:32:30 GMT To: brl-comp-graphics@smoke.brl.mil

Although there have been many postings explaining how the SuperBowl 3D effect worked, I thought those of you wondering about the details may be interested in the following paper just published. Pulfrich published his paper in 1922 by the way.

A Physiological Correlate of the Pulfrich Effect in Cortical Neurons of the Cat
Thom Carney, Michael Paradiso, Ralph Freeman
Vision Research (1989), vol. 29, no. 2, pp. 155-165

The abstract says: "We found that placing a filter before one of the cat's eyes produced a temporal delay in the cortical response."

From a quick skim of the paper, the delays they find are roughly 11 and 29 milliseconds, for 1 and 2 log unit filters.


Date: 10 Feb 89 17:34:09 GMT From: Gavin Bell <sgi!gavin%krypton.SGI.COM@ucbvax.berkeley.edu> Organization: Silicon Graphics, Inc., Mountain View, CA Subject: Re: Personal IRIS benchmarks To: info-iris@BRL.MIL

We quote 5,900 Z-buffered, Gouraud shaded, 4 sided 100x100 independent polygons per second on the Personal Iris. The 100,000 polygons/second figure you heard is for the GTX products.

Will you get 5,900 polygons per second in your application? Not if:

1) You spend any time computing the polygons, or spend any time
   re-organizing the vertex data to match the v() commands.
2) You spend any time clearing your window or z-buffer (remember, it takes
   ~10 microseconds to clear the screen, so at 30 frames/second ~20 percent
   of your time is spent just clearing the framebuffer).
3) You have big polygons (bigger than 10 by 10 pixels).
4) You use the old drawing commands.
5) You draw few polygons in double-buffered mode. Worst case is drawing one
   polygon, then swapping buffers -- the swapbuffers() command has to wait
   for the vertical retrace of the monitor you are using, so you will get
   only ~60 polygons/second.

The benchmark used to get the 5,900 poly/sec number is, of course, nowhere close to a real application. It is single-buffered, never clears the framebuffer or z-buffer, has almost no CPU overhead, and draws only 100-pixel polygons. But it does give you an idea of maximum drawing speed.

--gavin (gavin@sgi.com)


Date: Mon, 13 Mar 89 10:13:04 EST From: Mike Muuss <mike@brl.mil> To: David F. Rogers <dfr@usna.mil> cc: ACST@BRL Subject: Re: Dials & buttons

Yes, all our SGI's have the dials and buttons. We use them both in MGED (big program) and in PL-SGI (small program); both are in the CAD Package.

I'll send a copy of pl-sgi.c under separate cover, although it has not changed since Release 3.0.

If you say "pl-sgi < /dev/null", press rightmouuse and select the "Axis" menu item, you can get a simple test.

Turn the Zoom knob five (5) full revolutions clockwise to enlarge the purple axis display. Then experiment with the 3 rotation and 3 xlate knobs.

Press RESET button to return to original (small) view.

As another test, try:

pixhist3d-pl /unix | pl-sgi

This will give a complex 3-d display. Start by turning the Zoom knob one revolution counter-clockwise.

Best, -Mike

KNOB LAYOUT:

Xrot    Xxlate
Yrot    Yxlate
Zrot    Zxlate
n/a     Zoom

BUTTON LAYOUT (top row only):

Ortho Perspective RESET Zero_Knobs


Date: Thu, 13 Apr 89 5:08:11 EDT From: Mike Muuss <mike@BRL.MIL> To: CAD@BRL.MIL cc: Moss@BRL.MIL, Gwyn@BRL.MIL Subject: parallel librt usage

If you call LIBRT from a parallel application, and have your application's a_resource pointers pointing to one of an array of resource structures (rather than using the default uni-processor resource structure supplied by rt_shootray), then in Release 3.1 of BRL-CAD one more line of code will be required:

resource[cpu].re_magic = RESOURCE_MAGIC;

This is because each resource structure is given a "magic number" to check for valid pointers and memory corruption.

If you are concerned about maintaining source for both pre- and post- Release 3.1 systems, this code can be #ifdef'ed:

#ifdef RESOURCE_MAGIC
	resource[cpu].re_magic = RESOURCE_MAGIC;
#endif

Sorry that an interface change was necessary, but this change allowed Phil and me to catch a bug in the Gould C compiler that had been haunting us for many weeks. It seems worthwhile to carry the magic number protection forward in new versions.
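For context, a rough sketch of where the new line sits in a parallel application. Only re_magic / RESOURCE_MAGIC and the a_resource hookup come from the text above; the header name, array size, and the surrounding routines are illustrative assumptions, and the rest of the struct application setup is omitted.

#include "raytrace.h"		/* librt declarations (assumed header name) */

#define MAX_CPUS	16	/* illustrative only */

static struct resource	resource[MAX_CPUS];	/* one per processor */

/* Called once at startup, before any rays are fired. */
void
init_resources(int ncpu)
{
	int	cpu;

	for( cpu = 0; cpu < ncpu; cpu++ )  {
#ifdef RESOURCE_MAGIC
		resource[cpu].re_magic = RESOURCE_MAGIC;	/* new in Release 3.1 */
#endif
	}
}

/* In the per-processor worker, after the rest of 'ap' has been set up: */
void
fire_one(struct application *ap, int cpu)
{
	ap->a_resource = &resource[cpu];
	(void)rt_shootray( ap );
}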

Best, -Mike

PS: As usual, this only applies to the development version on SPARK. (Until the release).


Date: Mon, 17 Apr 89 17:31:54 EDT From: Mike Muuss <mike@BRL.MIL> To: CAD@BRL.MIL Subject: xxx_prep() change

Internal to LIBRT, I have reduced the number of parameters to the ft_prep() routines from 6 to 3. This affected the start of all the geometry routines (eg, g_arb.c, g_sph.c, etc), but no application code should care.

This should provide a minor performance improvement -- the motivation was due to some general interface cleanup. -Mike


Date: Fri, 21 Apr 89 7:04:21 EDT From: Mike Muuss <mike@BRL.MIL> To: CAD@BRL.MIL Subject: Sliders, etc

This evening, I have integrated Bill's slider code into MGED, with some enhancements; the sliders seem to work fine both with and without a button box.

Lee has provided several new programs for the util directory, including pix-sun, alias-pix, pix-alias, pixcolors.

Pixcolors is the most notable of these -- it gives a list of all the unique colors used in an image, by maintaining a 2^24 bit bit-vector in memory. This is a neat new capability, especially useful when preparing images for display devices with limited color resolution (eg, Mac II).
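A minimal sketch of that bit-vector trick (my own illustration, not the pixcolors source): one bit per possible 24-bit RGB value, 2^24 bits = 2 Mbytes, set as the pixels stream by and then counted. The RGB byte ordering on stdin is assumed.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	unsigned char	*seen;
	unsigned char	pix[3];
	unsigned long	rgb, count = 0, i;

	seen = (unsigned char *)calloc(1L << 21, 1);	/* 2^24 bits = 2^21 bytes */
	if (seen == NULL)
		return 1;

	/* read RGB triples from stdin and mark each color as seen */
	while (fread(pix, 1, 3, stdin) == 3) {
		rgb = ((unsigned long)pix[0] << 16) |
		      ((unsigned long)pix[1] <<  8) | pix[2];
		seen[rgb >> 3] |= 1 << (rgb & 7);
	}

	/* count the set bits */
	for (i = 0; i < (1L << 21); i++) {
		unsigned char b = seen[i];
		while (b) {
			count += b & 1;
			b >>= 1;
		}
	}
	printf("%lu unique colors\n", count);
	free(seen);
	return 0;
}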

This completes the list of "features pending for Release 3.1", the next step is to finish the integration and testing phase. Best, -Mike


Date: Tue, 25 Apr 89 18:51:50 EDT From: Mike Muuss <mike@BRL.MIL> To: CAD@BRL.MIL cc: Info-Alliant@mcs.anl.gov, Rabiner@BRL.MIL Subject: Alliant 3.0 cc bug

Integration testing for the latest BRL-CAD release has disclosed a problem with the C compiler (/bin/cc) on Concentrix 3.0. I have not been able to check the compiler on later releases.

With these data structures:

struct aface {
	fastf_t	A[3];			/* "A" point */
	fastf_t	N[3];			/* Unit-length Normal (outward) */
	fastf_t	NdotA;			/* Normal dot A */
};
struct arb_specific {
	int		arb_nmfaces;	/* number of faces */
	struct oface	*arb_opt;	/* pointer to optional info */
	struct aface	arb_face[4];	/* May really be up to [6] faces */
};

This code fragment fails:

for( i=0; i < pa.pa_faces; i++ )
	arbp->arb_face[i] = pa.pa_face[i];	/* struct copy */

This is the workaround:

{
	register struct aface	*aip, *aop;

	aip = &pa.pa_face[pa.pa_faces-1];
	aop = &arbp->arb_face[pa.pa_faces-1];
	for( i=pa.pa_faces-1; i>=0; i--, aip--, aop-- )  {
		*aop = *aip;		/* struct copy */
	}
}

Gary Moss' earlier remarks that the Alliant sometimes has problems with structure copies are certainly true.

Since this workaround is more efficient than the earlier version, I have adopted it as the installed version of the code. However, we should continue to be vigilant when performing struct copies on the Alliant. As 3.0 is very old, I don't want any response from Alliant -- just wanted others to know.

Best, -Mike


Date: Thu, 27 Apr 89 11:09:22 EDT From: "Gary S. Moss" (VLD/VMB) <moss@BRL.MIL> To: Doug Gwyn <gwyn@BRL.MIL> cc: moss@BRL.MIL, acst@BRL.MIL Subject: Re: another fbed bug

< When zooming out, fbed should make sure that the displayed image
< is moved if necessary so that no attempt is made to display beyond the
< edge of the frame buffer. You can see this problem on VGR using the
< Adage (Ikonas), by moving the cursor away from the center of the
< image, zooming in several times, then zooming all the way back out.
This is actually intentional. If you want the image framed WRT the window, hit <return>; otherwise it is useful to be able to have the border of the image in the center of the screen for "doctoring" up pixels at the border, especially while zooming.

-moss


Date: Fri, 28 Apr 89 6:06:35 EDT From: Mike Muuss <mike@BRL.MIL> To: CAD@BRL.MIL cc: Barry@BRL.MIL, Sam@BRL.MIL, KGS@BRL.MIL Subject: New libpkg

Yesterday and today, Phil and I have been stalking a mysterious problem that turned out to be in libpkg!

The program fbframe, when run over a PKG connection to a remote framebuffer daemon, with the remote /dev/debug specified, caused a variable number of debugging messages to come back (either 2, or the expected 6).

It turns out that under conditions of very heavy "burst" traffic, such as this test generated, PKG messages could be processed out of order. This was due to always giving priority to reading new input over sending output. In fbframe, what happened was the packet representing fb_close() wound up getting processed *before* the last four fb_writerect() messages, and fb_close() cut off the connection.

This is a rather rare case, but PKG users, especially heavy-duty users like the ADDCOMPE folks, should retrieve the latest version from spark:/m/cad/libpkg for inspection. Now that the bug is known, I'd hate to learn of it bothering somebody else.

The new version is greatly revamped, and uses significantly fewer system calls (always 1/2 as many, usually 1/3 as many). For a test case of the old single-pixel-I/O "fbgrid" program, this new, bug-fixed version of libpkg also provided five times greater performance (36 seconds -vs- 2:38) for the same high-traffic test.

The new version of libpkg will be included in the upcoming BRL-CAD release. For those of you waiting somewhat impatiently, it is finding and fixing the last little "nits" like this that are taking the extra time. Best, -Mike


Date: Sat, 29 Apr 89 9:58:57 EDT From: Mike Muuss <mike@BRL.MIL> To: CraySupport@BRL.MIL Subject: xmp adb/dbx & multi-tasking

I encountered a series of operand range errors in some multi-tasking code that I was debugging on the XMP this past evening. Unfortunately, debugging proved to be very difficult, as both ADB and DBX were unable to provide any useful information from the core dump. It seems that core dumps from multi-tasking processes cause the debuggers some trouble, even under UNICOS 4.

I will greatly appreciate all efforts made to find some technique for extracting information from these multi-tasking core dumps.

The immediate need has passed. Stumped by the debuggers, I decided to display my core dump on a framebuffer. I was able to visually determine that the buffer of interest required a file offset of one byte, and a "pixel" offset of 400 scanlines. The command line was:

( gencolor -r1 0 ; cat core ) | pix-fb -w512 -n2048 -S 512 -y400

This permitted me to see which pixel was the last one to be added to the output buffer, and, as it turns out, thus guess the subroutine which was having the difficulty. Not the best way to debug, but it worked. Best, -Mike


Date: Sat, 29 Apr 89 0:42:41 EDT From: Mike Muuss <mike@BRL.MIL> To: Mike@BRL.MIL, Butler@BRL.MIL Subject: hd

Lee Butler's "hd" Hex Dump program has been added to the BRL-CAD "util" directory. -Mike



< mike@arl.mil >