    • CommentAuthorrob
    • CommentTimeJun 4th 2008
     
    I'm just starting out with HighC, but I'm not new to music. I thought I'd share a couple of thoughts on HighC from a beginner's perspective, albeit a musically trained one.

    All music, regardless of genre, makes use of tension and release as an organizing principle for sound. The musical devices you study all have a role in creating or releasing tension. Here's a partial list of musical devices and how they relate to this discussion:
    pitch: upward motion creates tension, downward motion resolves it
    texture: density increases tension, sparsity resolves
    interval: dissonance creates tension, consonance resolves tension
    tempo: acceleration creates tension, slowing resolves it
    intonation: bending away from the nominal pitch increases tension, returning resolves it
    harmony: there’s a very sophisticated system of creating and resolving tension in western harmony (you can buy books and take college courses on it)
    syncopation: playing off the beat creates tension, on the beat resolves it
    beat division: more divisions per beat creates tension, fewer resolves tension
    dynamics: louder creates tension, softer releases tension (this is debatable–you could make a decent case that moving away from the center increases tension, while moving toward decreases. Either way, change in dynamics can create and resolve tension)
    articulation: shorter (staccato) creates tension, longer (legato) releases
    repetition: repetition builds tension, variation resolves it
    ornamentation: ornaments increase tension, simplicity resolves it

    So one way (a productive way, in my view) to view a composition is as a shape that is a graph of tension versus time. As a listener, you can graph this shape even without knowing a lot about music. If you're a composer, you'll have a heightened awareness of how the musical devices at your disposal contribute to the tension-vs.-time curve.
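
    To make the idea concrete, here is a toy sketch (purely illustrative: the signals and weights are invented, and this is nothing HighC itself computes) of a tension-versus-time curve built as a weighted sum of a few normalized device signals:

        import numpy as np

        def tension_curve(pitches, densities, dynamics, weights=(0.4, 0.4, 0.2)):
            """Toy tension-vs-time curve: a weighted sum of normalized device signals.

            pitches, densities and dynamics are equal-length sequences sampled over
            time; rising pitch, denser texture and louder dynamics each push the
            curve up. The weights are arbitrary and only illustrate the idea."""
            def normalize(x):
                x = np.asarray(x, dtype=float)
                span = x.max() - x.min()
                return (x - x.min()) / span if span > 0 else np.zeros_like(x)

            w_pitch, w_density, w_dyn = weights
            return (w_pitch * normalize(pitches)
                    + w_density * normalize(densities)
                    + w_dyn * normalize(dynamics))

        # A phrase that climbs, thickens and gets louder, then relaxes:
        curve = tension_curve(pitches=[60, 62, 65, 69, 72, 67, 60],
                              densities=[1, 2, 3, 5, 6, 3, 1],
                              dynamics=[0.3, 0.4, 0.5, 0.8, 1.0, 0.5, 0.2])
        print(curve.round(2))  # rises towards the middle of the phrase, then falls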

    Using graphic notation should make it easier to depict the notion of a tension-time curve. The pitch and time dimensions are pretty plainly in view. Density/sparsity is cleanly depicted. HighC's transparency depiction of dynamics is easy to track visually. Repetition shows up quite clearly.

    Other dimensions, like consonance/dissonance, don't map as well into the graphic notation. To be completely fair, they don't really jump off the page in standard notation either. Nothing about the interval from F to B looks much different from the interval from F to C, yet the first is highly dissonant and the second highly consonant.

    Making the indications of tension and release more plainly visible should, I think, lead to a heightened awareness in the composer of tension and release as an organizing principle of music.
    • CommentAuthoradmin
    • CommentTimeJun 4th 2008 edited
     
    Thanks for this enlightening comment, Rob.

    I've wanted for a long time to set up a page where I'd explain my choices for mapping auditory features to graphic attributes, but I see you've conveyed the idea with great precision.

    There are a lot of improvements I can make to better 'reveal' auditory features visually. Some are easy to improve: for instance, spectral content is currently mapped to arbitrary colors. I'd like to encode the brilliance of a sound as hue, and its 'harmonicity' as color saturation: white noise would be grey/black, pink noise an unsaturated pink, red noise an unsaturated red, blue noise an unsaturated blue; a sound with lots of spectral energy in the high range would tend towards bluish, while a pure sine would be reddish. Harmonic sounds would be highly saturated, noise bands would be desaturated. I've been thinking about a few other improvements that are easy to conceive (but long to program: HighC is not my day job...).
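
    To make that mapping concrete, here is a rough Python sketch (entirely illustrative: the frequency range and the spectral-flatness stand-in for 'harmonicity' are guesses, not how HighC actually computes colors). The spectral centroid drives hue from red (dull) towards blue (brilliant), and a peaky, harmonic spectrum drives saturation up while a flat, noisy one washes it out:

        import colorsys
        import numpy as np

        def spectral_color(signal, sr, f_min=50.0, f_max=10000.0):
            """Map a sound's spectrum to an (r, g, b) color: hue from the spectral
            centroid ('brilliance'), saturation from spectral flatness used as a
            crude 'harmonicity' proxy. All ranges here are illustrative guesses."""
            spectrum = np.abs(np.fft.rfft(signal)) + 1e-12
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)

            # Brilliance: spectral centroid on a log-frequency scale -> hue.
            centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))
            pos = np.clip((np.log(max(centroid, f_min)) - np.log(f_min))
                          / (np.log(f_max) - np.log(f_min)), 0.0, 1.0)
            hue = 0.66 * pos  # 0.0 = red (dull) ... 0.66 = blue (brilliant)

            # Harmonicity proxy: spectral flatness (geometric mean / arithmetic mean).
            # A flat, noisy spectrum gives flatness near 1, hence low saturation.
            flatness = float(np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum))
            saturation = 1.0 - min(max(flatness, 0.0), 1.0)

            return colorsys.hsv_to_rgb(hue, saturation, 1.0)

    With that convention, lower spectral centroids land nearer the red end and higher ones nearer blue, and harmonic sounds come out more saturated than noise bands, which is the intent described above.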

    I'm gonna think more deeply about reflecting tension/release in the visual encoding... it requires some thought.

    As for revealing the consonant/dissonant relationship of a group of sounds, that, I agree, is a strong shortcoming of the chosen paradigm. If there's a solution, it does not lie in mapping pitch to a single linear dimension. I've been thinking of using the p-adic distance (base 2) to map sounds on the vertical axis: the more dissonant a sound is relative to a reference sound, the higher it would sit on the score. Well, it's hard to describe in a few words... And anyhow, still not a satisfying solution.
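
    Just to give a flavour of the 2-adic idea (this is only one possible reading of it, sketched on toy just-intonation ratios; nothing like this is implemented in HighC): measure how far an interval's frequency ratio sits from 1 using the 2-adic distance, so that "odder" ratios end up farther away.

        from fractions import Fraction

        def v2(n: int) -> int:
            """2-adic valuation: the exponent of 2 in the factorization of n (n != 0)."""
            v = 0
            while n % 2 == 0:
                n //= 2
                v += 1
            return v

        def padic_distance(ratio: Fraction, reference: Fraction = Fraction(1)) -> float:
            """2-adic distance |ratio - reference|_2 between two frequency ratios."""
            d = ratio - reference
            if d == 0:
                return 0.0
            return 2.0 ** -(v2(d.numerator) - v2(d.denominator))

        # Distance of a few just intervals from the unison:
        for name, r in [("octave", Fraction(2, 1)), ("fifth", Fraction(3, 2)),
                        ("major third", Fraction(5, 4)), ("tritone", Fraction(45, 32))]:
            print(name, padic_distance(r))  # 1.0, 2.0, 4.0 and 32.0 respectively

    Whether that ordering really tracks perceived dissonance is of course the open question; as said above, it is not a satisfying solution yet.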

    Perhaps I could augment the score with visual marks that somehow "tie" or "untie" consonant and dissonant sounds (based on the GCD of their relative frequencies, or Helmholtz's scale of dissonance).
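
    Again purely as a sketch of what could drive such tie marks (the small-integer approximation and the threshold are arbitrary choices, not Helmholtz's actual dissonance curve): approximate the frequency ratio of two simultaneous sounds by a simple fraction and tie them when that fraction is simple enough.

        from fractions import Fraction

        def consonance(f1: float, f2: float, max_denominator: int = 16) -> float:
            """Crude consonance proxy: approximate the frequency ratio by a
            small-integer fraction p/q and score it as 1/(p*q). Simple ratios
            (octave 2/1, fifth 3/2) score high; complex ones score low."""
            r = Fraction(max(f1, f2) / min(f1, f2)).limit_denominator(max_denominator)
            return 1.0 / (r.numerator * r.denominator)

        def should_tie(f1: float, f2: float, threshold: float = 1 / 16) -> bool:
            """Draw a 'tie' mark between two sounds if they are consonant enough."""
            return consonance(f1, f2) >= threshold

        # 440 Hz against 660 Hz is a 3/2 fifth -> tied; 440 Hz against ~622 Hz
        # (an equal-tempered tritone, approximated as 17/12) -> not tied.
        print(should_tie(440.0, 660.0), should_tie(440.0, 622.25))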

    Anyhow, thanks again for this comment; I will keep pondering it over the coming weeks.
    • CommentAuthoradmin
    • CommentTimeJun 5th 2008
     
    One thing to add: along the time axis, the same issue arises in exhibiting rhythmic tension/release: a sequence of regular "on tempo" beats is hard to distinguish from the same sequence of beats slightly shifted, randomly, before or after the beat.
    The effect on tension is dramatic and the listening experience radically different, but the visual difference will be tiny.

    Whatever solution comes up to exhibit harmonic reinforcement/cancellation along the pitch axis will need to be applied on the time axis as well.
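
    As a toy illustration of how small those numbers are (the grid and the offsets below are made up), one could measure each onset's deviation from the nearest grid beat and use it to drive some extra visual emphasis:

        def rhythmic_tension(onsets, beat=0.5):
            """For each onset time (in seconds), return its signed offset from the
            nearest multiple of `beat`. On-grid notes give 0; 'pushed' or 'dragged'
            notes give small nonzero values that could drive a colour or a halo."""
            return [t - round(t / beat) * beat for t in onsets]

        # A strict grid versus the same notes shifted by a few tens of milliseconds:
        print(rhythmic_tension([0.0, 0.5, 1.0, 1.5]))      # all zeros
        print(rhythmic_tension([0.02, 0.47, 1.03, 1.46]))  # tiny offsets, very audible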
    • CommentAuthorrob
    • CommentTimeJun 5th 2008
     
    My perspective is that the notation doesn't really need to depict those things explicitly, although it would be nice.

    There is a decent parallel to an architectural drawing. There's no single view that depicts every necessary detail of a structure. But there are elevation and floor plan and mechanical views that make specific details clear. To realize the structure, you need all the details, but putting them all into one drawing would obscure information instead of revealing it.

    Composers have to create structures, then figure out ways to convey/depict them for the performers. In the HighC case, you have more of a direct connection to the performance since the notation is pretty tightly coupled with the rendered performance. That element of how to "trick" the performer into doing something outside his experience is taken care of by the program. In some senses, notation is a compromise between tricking the performer into doing the intended action, and depicting the structure of the piece for purposes of analysis.

    In film scoring today, I think it's pretty common for the notation to be an afterthought, handled by transcriptionists who convert the midi files into a score.

    I think the real value of a tool like this is that it makes the composer confront notions of structure, and address the abstraction rather than getting absorbed in the concrete details of the piece, like which VST instrument, which patch, the velocity of the MIDI note-on message that occurs at 34:03.004, or the pan position of the ride cymbal, etc.

    It's easy to get mired in those details and wind up with no clarity about the structure of your music. On the other hand, I suppose that if you have no notion of structure to begin with, using HighC won't automatically make your work more structured. But if you look, you'll find that the structures you do have are perhaps easier to bring into focus. The UI seems to do a good job of pushing the details below the surface in a way that makes structure more apparent, at least compared with conventional notation. Could we envision some ideal system where everything is simultaneously revealed, yet all structural features are apparent? Maybe.

    But the composer has to context-shift frequently and fluidly between different abstractions anyway: you may conceive a concerto, but you have to write it down a note at a time. Abstraction gives you leverage. Composers (and performers) have to manipulate mental representations of musical structures. The rhythmic examples you gave are a good indication that not every musical abstraction can be conveniently parameterized and then subsequently visualized. Some abstractions have contextual modifiers: dividing a unit of time into four equal parts is low-tension in a duple-meter context, but potentially very high-tension in a triple meter.

    I guess my real point is that graphical notation forces the abstraction level upward for a composer, and that's good. Thinking about how music is organized is productive, wholesome, and edifying.
    • CommentAuthorthbb
    • CommentTimeJun 12th 2008 edited
     
    The architectural drawing comparison is a great insight. In fact, it is completely in line with my main job (see the ILOG Discovery information visualization tool): to get a good and workable understanding of complex data, one needs multiple perspectives.

    Now, that pushes the HighC agenda out by a few more years of development. I suppose you'd rather see more trivial things solved first, such as the "pitch-linear" vs. "frequency-linear" issue...

    But please keep contributing these perspectives, though don't expect too much progress for a year or two...