JKCalhoun a day ago

As a hobbyist, I'm playing with analog computer circuits right now. If you can match your curve with a similar voltage profile, a simple analog integrator (an op-amp with a capacitor connected in feedback) will also give you the area under the curve (also as a voltage of course).

Analog circuits (and op-amps generally) are surprisingly cool. I know, kind of off on a tangent here, but I have integration on the brain lately. You say "4 lines of Python", and I say "1 op-amp".
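For comparison, the digital side of the bet: a Monte Carlo area estimate really does fit in a few lines of Python (the integrand and sample count below are arbitrary choices, not from the article):

```python
import random

def mc_area(f, a, b, n=100_000):
    """Estimate the area under f on [a, b] by averaging random samples."""
    total = sum(f(a + (b - a) * random.random()) for _ in range(n))
    return (b - a) * total / n

# Example: area under f(x) = x^2 on [0, 1]; exact answer is 1/3.
print(mc_area(lambda x: x * x, 0.0, 1.0))
```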

  • tim333 an hour ago

    On op-amps: I've got a personal theory that the cochlear amplifier in the ear is basically an op-amp providing negative feedback to prevent excessive amplitudes, rather than the positive feedback described on Wikipedia: https://en.wikipedia.org/wiki/Cochlear_amplifier

  • addaon a day ago

    One of my favorite circuits from Korn & Korn [0] is an implementation of an arbitrary function of a single variable. Take an oscilloscope-style display tube. Put your input on the X axis as a deflection voltage. Close a feedback loop on the Y axis with a photodiode, and use the Y-axis deflection voltage as your output. Cut your function of one variable out of cardboard and tape it to the front of the tube.

    [0] https://www.amazon.com/Electronic-Analog-Computers-D-c/dp/B0...

    • bncndn0956 5 hours ago

      N-SPHERES

      https://youtu.be/BDERfRP2GI0

      N-SPHERES is the most complex oscilloscope music work by Jerobeam Fenderson & Hansi3D and took six years to make.

      Since it is almost entirely created with parametric functions, it is possible to store only these functions in an executable program and let the program create the audio and video output on the fly. The storage space required for such a program is just a fraction of that of an audio or video file, so it's possible to fit the executables for the entire audiovisual EP on one 3.5" 1.44 MB floppy disk. The first 500 orders will receive the initial numbered edition with pen-plotted artwork.

  • dreamcompiler a day ago

    Yep. This is also how you solve differential equations with analog computers. (You need to recast them as integral equations because real-world differentiators are not well-behaved, but it still works.)

    https://i4cy.com/analog_computing/

    • ogogmad 21 hours ago

      How does this compare to the Picard–Lindelöf theorem and the technique of Picard iteration?
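      For context, Picard iteration solves y' = f(t, y) by repeatedly applying the integral operator y ↦ y0 + ∫ f, which is roughly what an analog integrator loop does in hardware. A minimal numeric sketch for y' = y, y(0) = 1 (my own example, whose fixed point is e^t):

```python
import numpy as np

# Picard iteration: y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds.
# Here f(t, y) = y, so the iterates converge to y(t) = e^t.
t = np.linspace(0.0, 1.0, 1001)
y = np.ones_like(t)               # initial guess y_0(t) = 1
for _ in range(20):
    integrand = y                 # f(t, y) = y
    # Cumulative trapezoid integral from 0 to each grid point.
    steps = np.diff(t) * (integrand[1:] + integrand[:-1]) / 2
    y = 1.0 + np.concatenate(([0.0], np.cumsum(steps)))

print(abs(y[-1] - np.e))          # error is small after enough iterations
```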

  • nakamoto_damacy 17 hours ago

    Speaking of Analog computation:

    A single artificial neuron could be implemented as:

    Weighted Sum

    Using a summing amplifier:

    net = Σ_i (R_f / R_i) · x_i

    Where resistor ratios set the synaptic weights.

    Activation Function

    Common op-amp activation circuits:

    Saturating function: via op-amp with clipping diodes → approximated sigmoid

    Hard limiter: comparator behavior for step activation

    Tanh-like response: differential pair circuits

    Learning

    Early analog systems often lacked on-device learning; weights were manually set with potentiometers or stored using:

    Memristive elements (recent)

    Floating-gate MOSFETs

    Programmable resistor networks
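    A behavioral sketch of that neuron in Python: weights set by resistor ratios, with a hard-clipped saturating activation. Component values are made up, and a real inverting summer would also negate the output:

```python
def analog_neuron(xs, r_in, r_f=10_000.0, v_sat=12.0):
    """Idealized summing amplifier: each weight is the ratio R_f / R_i.
    (A real inverting summer also flips the sign; omitted for clarity.)"""
    net = sum((r_f / r_i) * x for x, r_i in zip(xs, r_in))
    # Clipping around the op-amp gives a crude saturating activation.
    return max(-v_sat, min(v_sat, net))

# Two inputs; the resistor choices below set weights 1.0 and 0.5.
print(analog_neuron([0.2, 0.4], [10_000.0, 20_000.0]))  # → 0.4
```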

bananaflag a day ago

> I hear that in electronics and quantum dynamics, there are sometimes integrals whose value is not a number, but a function, and knowing that function is important in order to know how the thing it’s modeling behaves in interactions with other things.

I'd be interested in this. So finding classical closed form solutions is the actual thing desired there?

  • morcus a day ago

    I think what the author was alluding to was the path integral formulation [of quantum mechanics] which was advanced in large part by Feynman.

    It's not that finding closed-form solutions is what matters (I don't think most path integrals would have closed-form solutions), but that the integration is done over the space of functions, not over Euclidean space (or a manifold in Euclidean space, etc.)

Animats a day ago

Good numerical integration is easy, because summing smooths out noise. Good numerical differentiation is hard, because noise is amplified.

Conversely, good symbolic integration is hard, because you can get stuck and have to try another route through a combinatoric maze. Good symbolic differentiation is easy, because just applying the next obvious operation usually converges.

Huh.

Mandatory XKCD: [1]

[1] https://xkcd.com/2117/
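A quick numeric illustration of the noise asymmetry (the function, noise level, and grid below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, np.pi, 10_001)
noisy = np.sin(x) + rng.normal(0.0, 1e-4, x.size)  # samples with small noise

# Summing smooths: trapezoid estimate of the integral (exact value: 2).
integral = np.sum((noisy[1:] + noisy[:-1]) / 2 * np.diff(x))
print(abs(integral - 2.0))                # stays near the noise floor

# Differencing amplifies: finite-difference derivative vs exact cos(x).
deriv = np.gradient(noisy, x)
print(np.max(np.abs(deriv - np.cos(x))))  # error of order noise/h, far larger
```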

  • kkylin 21 hours ago

    That's exactly right. A couple more things:

    - Differentiating a function composed of simpler pieces always "converges" (the process terminates). One just applies the chain rule. Among other things, this is why automatic differentiation is a thing.
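    A dual-number sketch of why mechanical differentiation terminates: each arithmetic operation carries its own local rule (the product rule below; the chain rule works the same way), so differentiating is a single pass over the expression. A minimal illustration, not a full AD library:

```python
class Dual:
    """Forward-mode autodiff value: d.v is f(x), d.d is f'(x)."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v * o.v, self.d * o.v + self.v * o.d)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

print(f(Dual(4.0, 1.0)).d)  # → 26.0
```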

    - If you have an analytic function (a function expressible locally as a power series), a surprisingly useful trick is to turn differentiation into integration via the Cauchy integral formula. Provided a good contour can be found, this gives a nice way to evaluate derivatives numerically.
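    The Cauchy-formula trick sketched numerically, assuming an analytic f and a circular contour (the radius and sample count here are arbitrary):

```python
import cmath

def cauchy_derivative(f, a, r=0.5, n=64):
    """f'(a) via the Cauchy integral formula, evaluating the contour
    integral with the trapezoid rule on a circle of radius r around a
    (spectrally accurate for analytic f)."""
    total = 0.0
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        total += f(a + r * cmath.exp(1j * theta)) * cmath.exp(-1j * theta)
    return (total / (n * r)).real

print(cauchy_derivative(cmath.exp, 0.0))  # ≈ 1.0, since exp'(0) = 1
```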

messe a day ago

An integral trick I picked up from a lecturer at university: if you know the result has to be of the form ax^n for some a that's probably rational and some integer n, but you're feeling really lazy and/or it's annoying to simplify (even for Mathematica), just plug in a transcendental value for x like Zeta[3].

Then just divide by powers of that transcendental number until you have something that looks rational. That'll give you a and n. It's more or less numerical dimensional analysis.

It's not that useful for complicated integrals, but when you're feeling lazy it's a fucking godsend to know what the answer should be before you've proven it.
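A sketch of the trick in plain Python, using π as the transcendental plug-in instead of Mathematica's Zeta[3], with a midpoint-rule quadrature standing in for whatever integral is being guessed:

```python
from fractions import Fraction
from math import pi

def guess_coefficients(value, x, max_n=10):
    """If value == a * x**n for small rational a and integer n, and x is
    transcendental, only the right power of x makes the quotient land
    on a rational."""
    for n in range(max_n):
        q = value / x**n
        a = Fraction(q).limit_denominator(100)
        if abs(q - a) < 1e-7:
            return a, n
    return None

# Numerically integrate t^2 from 0 to pi (midpoint rule); the exact
# answer, which we pretend not to know, is x**3 / 3.
N = 100_000
F = sum(((k + 0.5) * pi / N) ** 2 * (pi / N) for k in range(N))
print(guess_coefficients(F, pi))  # → (Fraction(1, 3), 3)
```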

eig a day ago

What is the advantage of this Monte Carlo approach over a typical numerical integration method (like Runge-Kutta)?

  • kens a day ago

    I was wondering the same thing, but near the end, the article discusses using statistical techniques to determine the standard error. In other words, you can easily get an idea of the accuracy of the result, which is harder with typical numerical integration techniques.
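    A sketch of that point: the same samples that produce the Monte Carlo estimate also give its standard error for free (integrand and sample count are arbitrary choices):

```python
import math, random

random.seed(1)
n = 100_000
a, b = 0.0, math.pi
samples = [math.sin(a + (b - a) * random.random()) for _ in range(n)]

mean = sum(samples) / n
estimate = (b - a) * mean                 # MC estimate of ∫ sin on [0, π] = 2
var = sum((s - mean) ** 2 for s in samples) / (n - 1)
stderr = (b - a) * math.sqrt(var / n)     # standard error from the same samples

print(estimate, "+/-", stderr)
```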

    • ogogmad 21 hours ago

      Numerical integration using interval arithmetic gets you the same thing but in a completely rigorous way.

  • edschofield a day ago

    Numerical integration methods suffer from the "curse of dimensionality": they require exponentially more points in higher dimensions. Monte Carlo integration has an error that shrinks as O(1/√N) regardless of dimension, so it scales much better.

    See, for example, https://ww3.math.ucla.edu/camreport/cam98-19.pdf
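    A small illustration of the scaling argument: in d = 10 dimensions the sample count stays fixed while any tensor-product grid blows up (the integrand is chosen to have a known answer):

```python
import random

random.seed(0)
d, n = 10, 100_000
# ∫ over [0,1]^d of (x_1 + ... + x_d) dx = d / 2; a tensor-product grid
# with even 10 points per axis would already need 10**10 evaluations.
total = sum(sum(random.random() for _ in range(d)) for _ in range(n))
print(total / n)  # ≈ 5.0, from n samples regardless of d
```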

  • a-dub 21 hours ago

    as i understand it: classical numerical methods are analytically inspired and computationally efficient, smoothing out noise from sampling, floating point error, etc., whereas monte carlo is computationally expensive brute-force random sampling, where you can improve accuracy by throwing more compute at the problem.

  • MengerSponge a day ago

    Typical numerical methods are faster and way cheaper for the same level of accuracy in 1D, but it's trivial to integrate over a surface, volume, hypervolume, etc. with Monte Carlo methods.

    • adrianN a day ago

      At least if you can sample the relevant space reasonably accurately, otherwise it becomes really slow.

    • jgalt212 a day ago

      The writer would have been well served to discuss why he chose Monte Carlo rather than summing up all the small trapezoids.

8bitsrule 19 hours ago

Cool how the computer versions seem to work well as long as re-normalization isn't involved.

ogogmad 21 hours ago

The use of confidence intervals here reminds me of the clearest way to see that integration is a computable operator, to the same degree that a function like sin() or sqrt() is computable. It follows from a natural combination of (i) interval arithmetic and (ii) the "Darboux integral" approach to defining integration. So intervals can do magic.
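A toy version of the Darboux side of that argument, restricted to monotone functions so the infimum and supremum on each subinterval are trivial to bound (interval arithmetic removes that restriction):

```python
def darboux_bounds(f, a, b, n=1_000):
    """Lower/upper Darboux sums for a monotone increasing f: on each
    subinterval the infimum sits at the left end and the supremum at
    the right, so the true integral is bracketed between the sums."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    lower = sum(f(x) * h for x in xs[:-1])
    upper = sum(f(x) * h for x in xs[1:])
    return lower, upper

lo, hi = darboux_bounds(lambda x: x * x, 0.0, 1.0)
print(lo, hi)  # both tighten around 1/3 as n grows
```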

ForOldHack 19 hours ago

I would bet on Feynman any day of the week. Numerical methods came up in 'Hidden Figures', where her solution was to use Euler's method to move from an elliptical orbit to a parabolic descent.