Embracing randomness to solve problems

It’s easy to think of problem-solving as following a defined algorithm step by step to get the same answer every time. While this works in many cases, it often breaks down for more complex problems – how would you even know which variables to use, or how to manipulate them to get the answers you want? Questions like these come up throughout computer science, mathematics, physics, biology, chemistry, engineering, psychology, economics, sociology… In many cases, it is easier and less computationally intensive to allow for some degree of randomness. This post isn’t meant to teach these concepts in depth, but only to serve as a little taste of them so you can explore more if you’re interested!

For example, in finance, many variables influence an outcome of interest, such as how well a portfolio performs. By simulating many outcomes under different values of those variables, you can estimate how the portfolio is likely to perform in the future, which can inform decisions about changes that could improve its performance. This is known as a Monte Carlo simulation [1].
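To make this concrete, here is a minimal Python sketch (not from the source above) that assumes yearly portfolio returns follow a normal distribution with made-up mean and volatility parameters; real financial models are far richer, but the core idea of simulating many random futures and summarizing them is the same.

```python
import random
import statistics

def simulate_portfolio(initial_value, mean_return, volatility, years, n_simulations):
    """Simulate many possible growth paths of a portfolio, assuming (for
    illustration only) normally distributed yearly returns."""
    final_values = []
    for _ in range(n_simulations):
        value = initial_value
        for _ in range(years):
            yearly_return = random.gauss(mean_return, volatility)
            value *= (1 + yearly_return)
        final_values.append(value)
    return final_values

# Hypothetical parameters: 7% average yearly return, 15% volatility
results = simulate_portfolio(10_000, 0.07, 0.15, years=10, n_simulations=10_000)
print("Median outcome after 10 years:", statistics.median(results))
print("5th percentile (a bad scenario):", sorted(results)[len(results) // 20])
```

Looking at the spread of simulated outcomes (rather than a single “expected” number) is what makes this approach useful for weighing risk.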

Another use of random sampling is numerical integration, especially for multiple integrals; this is known as Monte Carlo integration [2]. Such integrals have many uses in the physical sciences, computer science, and other domains. Many integrals are difficult to compute analytically, but in some scenarios an exact answer is not needed – only one precise enough for the application at hand. This can be accomplished by sampling random points within the given bounds, and with enough samples, the estimate approaches the true answer. A simplified example is estimating the integral of some function over an interval, \int_{a}^{b}f(x)\,dx. Knowing that the average value of the function over the interval satisfies f_{avg} = \frac{1}{b-a}\int_{a}^{b}f(x)\,dx, you can estimate f_{avg} by averaging the function at many random sample points, then rearrange to get the integral: \int_{a}^{b}f(x)\,dx \approx (b-a)\,f_{avg}.
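As a rough sketch of this idea, the Python snippet below estimates \int_{0}^{\pi}\sin(x)\,dx (which is exactly 2) by averaging the function at uniformly random sample points; the function and sample count are just illustrative choices.

```python
import random
import math

def monte_carlo_integrate(f, a, b, n_samples=100_000):
    """Estimate the integral of f over [a, b]: the integral equals
    (b - a) times the average value of f at random points in [a, b]."""
    total = sum(f(random.uniform(a, b)) for _ in range(n_samples))
    return (b - a) * total / n_samples

# The estimate approaches the true value (2) as the number of samples grows
print(monte_carlo_integrate(math.sin, 0, math.pi))
```

For a one-dimensional integral like this, ordinary numerical methods are usually better; the random-sampling approach shines when the integral has many dimensions.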

Randomness also appears in optimization problems, which have applications across many different fields. One such method is simulated annealing [3]. Imagine a function with lots of peaks and valleys representing some quantity in a problem, and you want to find the minimum (or maximum) value within the domain. Depending on the problem, many greedy search algorithms will fail to find the best (global) minimum within the search space, and will instead get stuck in a local minimum. This can happen with search algorithms that only keep a result if it’s better than the previous one, such as hill-climbing algorithms. With simulated annealing, you randomly search among neighbouring points and initially allow moves to worse results, then gradually “cool” down the tolerance for worse results until the search settles into (ideally) the global optimum.
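A toy Python sketch of the idea might look like the following; the test function, step size, and cooling schedule are arbitrary illustrative assumptions rather than a tuned implementation.

```python
import random
import math

def simulated_annealing(f, x0, step=0.5, t_start=5.0, t_end=1e-3, cooling=0.999):
    """Minimize f by randomly perturbing x, sometimes accepting worse points;
    the acceptance probability shrinks as the 'temperature' cools."""
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    t = t_start
    while t > t_end:
        candidate = x + random.uniform(-step, step)
        f_candidate = f(candidate)
        delta = f_candidate - fx
        # Always accept improvements; accept worse moves with probability exp(-delta / t)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, fx = candidate, f_candidate
            if fx < best_fx:
                best_x, best_fx = x, fx
        t *= cooling  # gradually lower the temperature
    return best_x, best_fx

# A bumpy function with several local minima; its global minimum is near x = -0.5
def bumpy(x):
    return x ** 2 + 10 * math.sin(3 * x)

print(simulated_annealing(bumpy, x0=8.0))
```

A plain hill-climber started at x0 = 8.0 would tend to stop in the first valley it finds; accepting occasional “uphill” moves early on is what lets the search escape those traps.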

Other algorithms that use randomness include evolutionary algorithms [4]. These start by generating a population of “individuals” with varied parameters, then select the individuals that perform best on the problem in question. The “fittest” individuals are then “bred”, using genetically inspired operations such as “crossover” and “mutation” to vary the parameters of their “offspring”. This continues for many generations in order to converge on good solutions to a problem. Evolutionary algorithms can be applied to artificial neural networks as part of the training process, and have many applications, including facial recognition, cybersecurity, diagnosing illnesses, machine translation, and even playing video games.
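Here is a deliberately tiny genetic algorithm in Python to show the selection/crossover/mutation loop; the specific choices (keeping the top half, uniform crossover, Gaussian mutation, and the toy fitness function) are simplified assumptions for illustration, not what any particular library does.

```python
import random

def evolve(fitness, n_params, pop_size=50, generations=200, mutation_rate=0.1):
    """A tiny genetic algorithm: keep the fittest individuals, cross over
    their parameters to produce offspring, and randomly mutate some genes."""
    population = [[random.uniform(-5, 5) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: rank by fitness (higher is better) and keep the top half
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        offspring = []
        while len(survivors) + len(offspring) < pop_size:
            parent_a, parent_b = random.sample(survivors, 2)
            # Crossover: each gene comes from one parent or the other
            child = [random.choice(genes) for genes in zip(parent_a, parent_b)]
            # Mutation: occasionally nudge a gene to keep exploring
            child = [g + random.gauss(0, 0.5) if random.random() < mutation_rate else g
                     for g in child]
            offspring.append(child)
        population = survivors + offspring
    return max(population, key=fitness)

# Toy problem: find parameters as close to (1, 2, 3) as possible
target = [1, 2, 3]
fitness = lambda individual: -sum((g - t) ** 2 for g, t in zip(individual, target))
print(evolve(fitness, n_params=3))
```

The only thing the algorithm ever sees is the fitness score, which is why the same loop can be pointed at very different problems just by swapping the fitness function.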

This was just a tiny overview of the enormous role randomness plays in computing – some of the details were left out to make it more digestible. If you’re interested, you can read more about any of these topics!

Sources:
1. McLeish DL. Monte Carlo Simulation and Finance. John Wiley & Sons; 2011.
2. Weinzierl S. Introduction to Monte Carlo methods. arXiv:hep-ph/0006269. Published online June 23, 2000. Accessed November 10, 2021. http://arxiv.org/abs/hep-ph/0006269
3. Nikolaev AG, Jacobson SH. Simulated Annealing. In: Gendreau M, Potvin JY, eds. Handbook of Metaheuristics. International Series in Operations Research & Management Science. Springer US; 2010:1-39. doi:10.1007/978-1-4419-1665-5_1
4. Bäck T, Schwefel HP. An Overview of Evolutionary Algorithms for Parameter Optimization. Evolutionary Computation. 1993;1(1):1-23. doi:10.1162/evco.1993.1.1.1
