
PM_ME_VINTAGE_30S

@PM_ME_VINTAGE_30S@lemmy.sdf.org

Anarchist, autistic, engineer, and Certified Professional Life-Regretter. I mostly comment bricks of text with footnotes, so don’t be alarmed if you get one.

You posted something really worrying, are you okay?

No, but I’m not at risk of self-harm. I’m just waiting on the good times now.

Alt account of PM_ME_VINTAGE_30S@lemmy.sdf.org. Also if you’re reading this, it means that you can totally get around the limitations for display names and bio length by editing the JSON of your exported profile directly. Lol.


PM_ME_VINTAGE_30S,

How is everyone doing? How’s life been treating everyone?

Could be better, could be a lot worse. I’m actually ahead for once in school, but I have a career fair tomorrow that I’m extremely not looking forward to. Kinda having an “unmotivated” day right now, but it’ll pass.

PM_ME_VINTAGE_30S,

Unfathomably disappointed that the caption doesn’t say masturbation station.

PM_ME_VINTAGE_30S,

My favorite sin is cos(2πft) = sin(2πft+π/2), because it’s easy to work with: if t is in seconds, the frequency is exactly f Hz.

PM_ME_VINTAGE_30S,

So in electrical engineering and audio applications, we often decompose a signal into a (possibly infinite [1]) weighted sum of sines and cosines. Because each sine has a frequency [2], each weight roughly represents how much of each frequency is in the decomposed signal. Similarly, if such a decomposition is performed on certain test signals, you can characterize how a system will act on any signal.

In audio applications, this has a particularly intuitive interpretation: this characterizes how bright or dull, bassy or trebly, “sounds good” or “sounds like shit”, a sound is, depending on what frequencies are present. For a system, it describes what frequencies it emphasizes and can be used as a figure of merit to decide if it is fit for purpose.

Also, because in audio applications we need to do things in nearly real-time, and because we can safely throw out all frequencies above 20kHz (we can’t hear them!), all the infinities and most of the calculus drop out in favor of matrix algebra. This is why, if you use an equalizer plugin (like ReaEQ or iZotope Ozone), you can see a(n approximate) chart of the frequencies that are sounding at any given time (technically in a window of time that is tiny compared to the progression of the music) and look visually for anything that shouldn’t be there.

For example, even if your speakers can’t reproduce 60Hz, you can check if a track has 60Hz hum from the power system by looking for a spike in the frequency response at 60Hz that stays up the whole time. To fix it, you would put in a notch filter that “throws out” 60Hz.
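Something like the following sketch shows the idea in MATLAB/Octave (hum_test.wav is a made-up file name, any mono recording works): plot the magnitude spectrum, and a persistent 60Hz hum shows up as a narrow spike at 60Hz.

% Sketch: look for a 60 Hz hum spike in a recording (hum_test.wav is a hypothetical file).
[x, fs] = audioread('hum_test.wav');    % samples and sample rate
x = x(:,1);                             % keep one channel
N = length(x);
X = fft(x);                             % decompose into frequencies
f = (0:N-1)*fs/N;                       % frequency axis in Hz
half = 1:floor(N/2);                    % only frequencies below fs/2 are meaningful
plot(f(half), 20*log10(abs(X(half)) + eps))
xlabel('Frequency (Hz)'), ylabel('Magnitude (dB)')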

I prefer cos to sin because if you solve for both in terms of complex exponentials, you end up with real weights in the answer. Specifically [3], cos(t) = 1/2 * ( e^jt + e^-jt ) and sin(t) = 1/(2j) * ( e^jt - e^-jt ), the latter of which is annoying. I prefer the form cos(2πft) because you can measure a real wave directly in terms of its frequency (= 1/period). Since the post “asked for” a sin(e), and sin(x+π/2) = cos(x) for any x, I plugged in my “pet” argument to get sin(2πft+π/2).
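If you want to check those identities numerically (along with the sin(x+π/2) = cos(x) trick above), a few lines of MATLAB/Octave will do it:

% Check the complex-exponential forms of cos and sin, plus the pi/2 phase shift.
t = linspace(0, 1, 1000);
f = 3;                                                 % arbitrary test frequency
max(abs(cos(t) - (exp(1j*t) + exp(-1j*t))/2))          % ~1e-16
max(abs(sin(t) - (exp(1j*t) - exp(-1j*t))/(2j)))       % ~1e-16
max(abs(cos(2*pi*f*t) - sin(2*pi*f*t + pi/2)))         % ~1e-16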

Also, on a more advanced note, if you derive the Fourier transform using e^j2πft as a basis, you end up with a unitary (read: mathematically convenient) transform. Unfortunately, using e^jωt where ω=2πf actually doesn’t yield a unitary transform, but e^jωt still constitutes an orthogonal basis, so it is still just as widely used in engineering applications.
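For reference, the two conventions side by side (standard textbook material, written here in LaTeX):

X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-j 2\pi f t}\, dt, \qquad x(t) = \int_{-\infty}^{\infty} X(f)\, e^{j 2\pi f t}\, df

X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j \omega t}\, dt, \qquad x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega)\, e^{j \omega t}\, d\omega

The f-convention is perfectly symmetric between the forward and inverse transforms (that’s the unitarity), while the ω-convention needs the lopsided 1/(2π) on the way back (or a 1/√(2π) split across both directions to restore unitarity).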

I literally have dozens of books about sin(e)s and how to use them to get what I want (Fourier analysis). It’s such a deep and interesting topic that I recommend everyone look into it at some point.

[1] In the real world, because there is only finite computer storage, any infinities are practically truncated or otherwise approximated away. Still, the full theory is useful for understanding and possibly deriving your way out of some complex calculation.

[2] The input to the sine function sin(2πft) must not have units, i.e. it needs to just be a number. Therefore, if the independent variable t is in terms of some unit, the frequency is in terms of 1/unit. For example, if the input is in seconds, the frequency is in 1/s = Hz.

[3] j = sqrt(-1). Yes, j is for jmaginary. Electrical engineers use j instead of i for the imaginary unit. Also, all arguments to sin() and cos() are in radians, not degrees. π radians = 180°, π/2 radians = 90°, and 2π radians = 360° = 1 rotation.

PM_ME_VINTAGE_30S,

What is the difference between these two?

Truncation is a primitive way to approximate something. In any case, truncating a process means to “cut off” the process after some point.

For example, take the number π ≈ 3.141592653589793… ad infinitum. If you need to do a calculation with π, you typically round it to some number of significant figures. For example, 3.1416 is π rounded to 5 significant figures. Truncation is even more primitive; for example, 3.1415 is π truncated to 5 significant figures by simply “cutting off” the rest.
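In MATLAB terms (the two-argument form of round needs a reasonably recent version; the floor trick works anywhere):

format long
pi                       % 3.141592653589793
round(pi, 4)             % 3.1416 (rounded to 5 significant figures)
floor(pi * 1e4) / 1e4    % 3.1415 (truncated: the rest is simply cut off)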

So Fourier analysis in its most general form requires you to compute an uncountably infinite (one element per real number) number of points to represent a frequency response. If your signal has a start time and an end time, you can get that down to a countable infinity (one element per integer) of points. This is equivalent to truncating the signal in time. Audio signals typically have a beginning and end, so theoretically no work needs to be done. However, for real-time processing, we typically break up the signal into tiny windows and analyze each of these windows piece by piece.

Now you can have a computer start your task, but it will never finish it. It won’t even finish processing one single window. To do that, you have to “ban” any frequencies above a fixed maximum from your analysis. This is equivalent to truncating the signal in its frequency response. For audio signals, this means that if there is anything above 22050Hz (or half your sample frequency), you throw it out before you process it any further. After this, you get an algorithm that will terminate in a finite number of steps.
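Here’s a crude sketch of that window-by-window processing in MATLAB/Octave (the 440Hz-tone-plus-hum test signal is made up, and real code would also overlap the windows):

% Chop a signal into short windows and FFT each one (a poor man's spectrogram).
fs = 44100;                                    % assumed sample rate
t  = (0:fs-1)'/fs;                             % one second of audio
x  = sin(2*pi*440*t) + 0.1*sin(2*pi*60*t);     % made-up test signal: a tone plus hum
win  = 1024;                                   % window length in samples
w    = 0.5 - 0.5*cos(2*pi*(0:win-1)'/(win-1)); % Hann taper, written out by hand
nwin = floor(length(x)/win);
spec = zeros(win/2, nwin);
for k = 1:nwin
    seg = x((k-1)*win + (1:win)) .* w;         % one tapered window
    S = fft(seg);
    spec(:,k) = abs(S(1:win/2));               % keep only frequencies below fs/2
end
imagesc(log10(spec + eps))                     % columns = time, rows = frequency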

Truncating the frequency response results in Gibbs oscillations (audible ringing) if done “irresponsibly”. There are other methods (e.g. Cesàro summation) that can be used to reduce these oscillations, but it’s kind of a technical topic. Practically, we use filters that “slowly” begin to throw out frequencies lower than the desired cutoff. These “gradual” filters don’t cause ringing.
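If you want to actually see the ringing, here’s a tiny sketch: partial sums of a square wave’s Fourier series oscillate near the jumps, and tapering the coefficients instead of cutting them off hard (the same spirit as Cesàro smoothing) tames it.

% Gibbs ringing demo: hard truncation of a square wave's Fourier series vs a tapered sum.
t = linspace(0, 1, 2000);
N = 25;                          % number of harmonics kept (the "truncation")
x_trunc = zeros(size(t));
x_taper = zeros(size(t));
for k = 1:2:N                    % a square wave has odd harmonics only
    c = 4/(pi*k);                % Fourier series coefficient
    w = 0.5*(1 + cos(pi*k/N));   % gradual taper instead of a hard cutoff
    x_trunc = x_trunc + c*sin(2*pi*k*t);
    x_taper = x_taper + w*c*sin(2*pi*k*t);
end
plot(t, x_trunc, t, x_taper)
legend('hard truncation (rings)', 'tapered (much less ringing)')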

I can only imagine the infinite becoming the limit of the computer or maybe just a very very high number

The infinite would be how many measured values are needed to recover the signal and its frequency response without losing any information. You need to iterate over each of these when doing Fourier analysis on a computer.

(cuz I’m hoping this is in-depth? this is normally when u say “actually, that’s not in-depth at all” and fill my DMs with an ocean of math LOL)

So I went to recording school several years before I went to engineering school. We were taught about frequency response and how to manipulate audio signals with canned software, but without any of the mathematics behind it. This is about as in-depth as that. Engineering school went into more depth, but considering that I want to actually write audio software, eventually with my own hand-rolled math, I really feel a need to understand Fourier analysis at a depth that even my engineering education alone has been unable to provide.

So there is a much more in-depth explanation (several books worth at least), but I’ll spare you the details ☺️. But if you ever want those details, feel free to ask for textbook recommendations.

thank god it’s not this complicated cuz that would be too much for my silly head >u<

I used to think math was too complicated for me to learn too. That’s why I waited so long to go to engineering school after high school. And it is complicated. But IMO when I approached math like I approached my creative pursuits, i.e. something I planned on retaining for the rest of my life, that is when it clicked for me. I have a terrible memory for random stuff that makes no sense and has no pattern. However, if I understand why something is the way it is, and what motivated mathematicians to come up with some new abstraction, it all of a sudden becomes a lot easier to internalize.

PM_ME_VINTAGE_30S,

I guess there were no pedals in the training data 😆.

PM_ME_VINTAGE_30S,

PM_me_vintage_30s is still a request, lol.

PM_ME_VINTAGE_30S,

Username checks out

PM_ME_VINTAGE_30S,

Back on R*ddit (with the same username), I actually got a PM of someone’s guitar cabinet speakers. None of them were Vintage 30s, but some other guitar speakers that they mounted in their speaker cabinet. It was very much appreciated.

Although I prefer Vintage 30s, I strongly encourage any and/or all of you to PM me any guitar speakers you have. But, I still live in hope that someday, someone might PM me Vintage 30s…

PM_ME_VINTAGE_30S,

In the right coordinate system it becomes a circle…

PM_ME_VINTAGE_30S,

I created this on my phone in MATLAB. You can probably do this in Octave with similar or the same code.

Figure 2024-01-14 17_18_24~2

First, I downloaded the image from Lemmy, then uploaded it into my MATLAB app. I renamed the image to image.jpg, then ran the following code:

image = double(imread('image.jpg'));   % cast to double before taking the FFT
imagesc(log10(abs( fftshift(fft2(image)) )))

fft2 applies a 2D fast Fourier transform to the image, which creates a complex (as in complex numbers) image. abs takes the magnitude of the complex image elementwise, fftshift moves the zero-frequency component to the center of the image, and log10 scales the result for display.

Then I downloaded the image from the MATLAB app, went into the Photos app and (badly) cropped out the white border.

Despite how dramatically different it looks, it actually contains the same [1] information as the original image. Said differently, you can actually go back to the original with the inverse functions, specifically by undoing the logarithm and applying the inverse FFT.

[1] Almost. (1) There will be border problems, potentially caused by me sloppily cropping some pixels out of the image. (2) It looks like MATLAB resized the image when rendering the figure. However, if I had saved the matrix (raw image) rather than the figure, it would be the correct size. (3) (Thank you to @itslilith for pointing this out.) You need the phase information to reconstruct the original signal, which I (intentionally) threw out when I took the absolute value (to get a real image) and then completely forgot about.
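A quick way to convince yourself of the “you can go back” part (keeping the phase this time, and skipping the display-only abs/log10 steps):

% Round trip: forward 2D FFT then inverse FFT recovers the original to rounding error.
img  = double(imread('image.jpg'));    % same image as above
back = real(ifft2(fft2(img)));         % any leftover imaginary part is numerical noise
max(abs(back(:) - img(:)))             % ~1e-10 or smaller: effectively the same image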

PM_ME_VINTAGE_30S,

So I took your image and ~~ruined my MATLAB account~~ used the most normal part of your totally normal cow as a 3D [1] ~~cockvolution~~ convolution kernel. So in some sense, I dragged the red and purple part all across your image and added up the results. Here’s the result:

Figure 2024-01-15 14_46_27

Here’s the MATLAB code:

% Work in double precision so conv2 is happy with the input.
normal_image = double(imread('totally_normal_image.png'));

% Crop out the "most normal" part of the image to use as the convolution kernel.
feature = normal_image(272:350, 205:269, :);

% Pad the 79x65 crop out to 79x79 so the kernel is square.
feature_expansion = padarray(feature, [0, ceil((79-65)/2), 0], 'replicate');

% Convolve each color channel separately, then normalize it for display.
for m = 1:1:3
    new_normal_image(:,:,m) = conv2(normal_image(:,:,m), feature_expansion(:,:,m));
    new_normal_image(:,:,m) = new_normal_image(:,:,m) / max(max(new_normal_image(:,:,m)));
end

imshow(new_normal_image)

[1] The original image was practically grayscale, so only a 2D convolution would have been required, i.e. over 2 spatial dimensions. Since you added color, there is an extra dimension, one per color channel, which makes it more annoying to work with in MATLAB. I mean, I could have just dumped everything into grayscale, but I need practice with processing color images anyways.

PM_ME_VINTAGE_30S,

But he’s getting charged with a crime, so isn’t it illegal shitposting?

PM_ME_VINTAGE_30S,

Hi! What’s your favorite videogame?

Fallout New Vegas, followed by GTA V. I’ve really liked Baldur’s Gate 3 so far, but I’m probably going to play a lot more sporadically during the semester.

Do you have any subject you’re super interested in and could talk about for 2 hours with no prep?

Audio engineering, music, control theory, and math, specifically linear systems, integral transforms, and calculus in general.

What are your thoughts on the Fediverse?

We got a good thing going here. I’m a bit concerned about the Meta federation stuff, more so for microblogging services than the threadiverse, but I think we’ll tank it. I really hope that Peertube takes off.

What are your hobbies?

Guitar, audio mixing, audio programming 💀, video games, watching gaming videos, engaging with people on Lemmy, cooking, and reading math and physics textbooks.

Yeah I’m into weird shit, but not the fun kind of weird.
