Major rework to improve code quality and add automation checks (#805)

* delete secant method - it is identical to regula falsi

* document + improve root finding algorithms

* attempt to document gaussian elimination

* added file brief

* commented doxygen-mainpage, added files-list link

* corrected files list link path

* files-list link correction - this time works :)

* document successive approximations

* cleaner equation

* updating DIRECTORY.md

* documented kmp string search

* document brute force string search

* document rabin-karp string search

* fixed mainpage readme

* doxygen v1.8.18 suppresses the #minipage in the markdown

* cpplint correction for header guard style

* github action to auto format source code per cpplint standard

* updated setting to add 1 space before `private` and `public` keywords

* auto rename files and auto format code

* added missing "run" for step

* corrected assignment operation

* fixed trim and assign syntax

* added git move for renaming bad filenames

* added missing pipe for trim

* added missing space

* use old and new fnames

* store old fname using echo

* move files only if there is a change in filename

* put old filenames in quotes

* use double quote for old filename

* escape double quotes

* remove old_fname

* try escape characters and echo

* add file-type to find

* cleanup echo

* ensure all trim variables are also in quotes

* try escape -quote again

* remove second escape quote

* use single quote for first check

* use carets instead of quotes

* put variables in brackets

* remove -e from echo

* add debug echos

* try print0 flag

* find command with while instead of for-loop

* find command using IFS instead

* 🎉 IFS fix worked - escaped quotes for git mv

* protect each word in git mv ..

* filename exists in lower cases - renamed

* 🎉 git push enabled

* updating DIRECTORY.md

* git pull & then push

* formatting filenames d7af6fdc8c

* formatting source-code for d7af6fdc8c

* remove allman break before braces

* updating DIRECTORY.md

* added missing comma lost in previous commit

* orchestrate all workflows

* fix yml indentation

* force push format changes, add title to DIRECTORY.md

* pull before proceeding

* reorganize pull commands

* use master branches for actions

* rename .cc files to .cpp

* added class destructor to clean up dynamic memory allocation

* rename to awesome workflow

* commented whole repo cpplint - added modified files lint check

* removed need for cpplint

* attempt to use actions/checkout@master

* temporary: no dependency on cpplint

* formatting filenames 153fb7b8a5

* formatting source-code for 153fb7b8a5

* updating DIRECTORY.md

* fix diff filename

* added comments to the code

* added test case

* formatting source-code for a850308fba

* updating DIRECTORY.md

* added machine learning folder

* added adaline algorithm

* updating DIRECTORY.md

* fixed issue [LWG2192](https://cplusplus.github.io/LWG/issue2192) for std::abs on MacOS

* add cmath for same bug: [LWG2192](https://cplusplus.github.io/LWG/issue2192) for std::abs on MacOS

* formatting source-code for f8925e4822

* use STL's inner_product

* formatting source-code for f94a330594

* added range comments

* define activation function

* use equal initial weights

* change test2 function to predict

* activation function not friend

* previous commit correction

* added option for predict function to return value before applying activation function as optional argument

* added test case to classify points lying within a sphere

* improve documentation for adaline

* formatting source-code for 15ec4c3aba

* added cmake to geometry folder

* added algorithm include for std::max

* add namespace - machine_learning

* add namespace - statistics

* add namespace - sorting

* added sorting algos to namespace sorting

* added namespace string_search

* formatting source-code for fd69530515

* added documentation to string_search namespace

* feat: Add BFS and DFS algorithms to check for cycle in a directed graph

* Remove const references for input of simple types

Reason: overhead on access

* fix bad code

sorry for force push

* Use pointer instead of the non-const reference

because apparently google says so.

* Remove a useless and possibly bad Graph constructor overload

* Explicitly specify type of vector during graph instantiation

* updating DIRECTORY.md

* find openMP before adding subdirectories

* added kohonen self organizing map

* updating DIRECTORY.md

* remove older files and folders from gh-pages before adding new files

* remove chrono library because cpplint disallows it

* use c++ specific static_cast instead

* initialize random number generator

* updated image links with those from CPP repository

* rename computer.... folder to numerical methods

* added durand kerner method for root computation for arbitrarily large polynomials

* fixed additional comma

* fix cpplint errors

* updating DIRECTORY.md

* convert to function module

* update documentation

* move openmp to main loop

* added two test cases

* use INT16_MAX

* remove return statement from omp-for loop and use "break"

* run tests when no input is provided and skip tests when input polynomial is provided

* while loop cannot have break - replaced with continue and check is present in the main while condition

* (1) break while loop (2) skip runs on break_loop instead of hard-break

* add documentation images

* use long double for errors and tolerance checks

* make iterator variable i local to threads

* add critical sections to omp threads

* bugfix: move file writing outside of the parallel loop
otherwise, there is no guarantee of the order of roots written to file

* rename folder to data_structures

* updating DIRECTORY.md

* fix ambiguous symbol `size`

* add data_structures to cmake

* docs: enable tree view, add timestamp in footer, try clang-assisted parsing

* doxygen - open links in external window

* remove invalid parameter from function docs

* use HTML5 img tag to resize images

* move file to proper folder

* fix documentations and cpplint

* formatting source-code for aacaf9828c

* updating DIRECTORY.md

* cpplint: add braces for multiple statement if

* add explicit link to badges

* remove duplicate line

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* remove namespace indentation

* remove file associations in settings

* add author name

* enable cmake in subfolders of data_structures

* create and link object file

* cpp lint fixes and instantiate template classes

* cpp lint fixes and instantiate template classes

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* cpplint - ignore `build/include`

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* disable redundant gcc compilation in cpplint workflow

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* template header files contain function codes as well and removed redundant subfolders

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* updating DIRECTORY.md

* remove semicolons after functions in a class

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* cpplint header guard style

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* remove semilon

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* added LU decomposition algorithm

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* added QR decomposition algorithm

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* use QR decomposition to find eigen values

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* updating DIRECTORY.md

* use std::rand for thread safety

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* move srand to main()

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* cpplint braces correction

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* updated eigen value documentation

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* fix matrix shift doc

Signed-off-by: Krishna Vedala <7001608+kvedala@users.noreply.github.com>

* rename CONTRIBUTION.md to CONTRIBUTING.md #836

* remove 'sort alphabetical order' check

* added documentation check

* remove extra parenthesis

* added gitpod

* added gitpod link from README

* attempt to add vscode gitpod extensions

* update gitpod extensions

* add gitpod extensions cmake-tools and git-graph

* remove gitpod init and add commands

* use init to one time install doxygen, graphviz, cpplint

* use gitpod dockerfile

* add ninja build system to docker

* remove configure task

* add github prebuild specs to gitpod

* disable gitpod addcommit

* update documentation for kohonen_som

* added ode solve using forward euler method

* added mid-point euler ode solver

* fixed integration step equation

* added semi-implicit euler ODE solver

* updating DIRECTORY.md

* fix cpplint issues - lines 117 and 124

* added documentation to ode group

* corrected semi-implicit euler function

* updated docs and test cases better structure

* replace `free` with `delete` operator

* formatting source-code for f55ab50cf2

* updating DIRECTORY.md

* main function must return

* added machine learning group

* added kohonen som topology algorithm

* fix graph image path

* updating DIRECTORY.md

* fix braces

* use snprintf instead of sprintf

* use static_cast

* hardcode character buffer size

* fix machine learning groups in documentation

* fix missing namespace function

* replace kvedala fork references to TheAlgorithms

* fix bug in counting_sort

Co-authored-by: github-actions <${GITHUB_ACTOR}@users.noreply.github.com>
Co-authored-by: Anmol3299 <mittalanmol22@gmail.com>
This commit is contained in:
Krishna Vedala, 2020-06-19 12:04:56 -04:00, committed by GitHub
parent 70a2aeedc3, commit aaa08b0150
313 changed files with 49332 additions and 9833 deletions


@@ -0,0 +1,18 @@
# If necessary, use the RELATIVE flag, otherwise each source file may be listed
# with full pathname. RELATIVE may makes it easier to extract an executable name
# automatically.
file( GLOB APP_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} *.cpp )
# file( GLOB APP_SOURCES ${CMAKE_SOURCE_DIR}/*.c )
# AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR} APP_SOURCES)
foreach( testsourcefile ${APP_SOURCES} )
# I used a simple string replace, to cut off .cpp.
string( REPLACE ".cpp" "" testname ${testsourcefile} )
add_executable( ${testname} ${testsourcefile} )
set_target_properties(${testname} PROPERTIES LINKER_LANGUAGE CXX)
if(OpenMP_CXX_FOUND)
target_link_libraries(${testname} OpenMP::OpenMP_CXX)
endif()
install(TARGETS ${testname} DESTINATION "bin/numerical_methods")
endforeach( testsourcefile ${APP_SOURCES} )


@@ -0,0 +1,75 @@
/**
* \file
* \brief Solve the equation \f$f(x)=0\f$ using [bisection
* method](https://en.wikipedia.org/wiki/Bisection_method)
*
* Given two points \f$a\f$ and \f$b\f$ such that \f$f(a)<0\f$ and
* \f$f(b)>0\f$, then the \f$(i+1)^\text{th}\f$ approximation is given by: \f[
* x_{i+1} = \frac{a_i+b_i}{2}
* \f]
* For the next iteration, the interval is selected
 * as: \f$[a,x]\f$ if \f$f(x)>0\f$ or \f$[x,b]\f$ if \f$f(x)<0\f$. The process
 * is continued until a close enough approximation is achieved.
*
 * \see newton_raphson_method.cpp, false_position.cpp
*/
#include <cmath>
#include <iostream>
#include <limits>
#define EPSILON \
1e-6 // std::numeric_limits<double>::epsilon() ///< system accuracy limit
#define MAX_ITERATIONS 50000 ///< Maximum number of iterations to check
/** define \f$f(x)\f$ to find root for
*/
static double eq(double i) {
return (std::pow(i, 3) - (4 * i) - 9); // original equation
}
/** get the sign of any given number */
template <typename T>
int sgn(T val) {
return (T(0) < val) - (val < T(0));
}
/** main function */
int main() {
double a = -1, b = 1, x, z;
int i;
// loop to find initial intervals a, b
for (int i = 0; i < MAX_ITERATIONS; i++) {
z = eq(a);
x = eq(b);
if (sgn(z) == sgn(x)) { // same signs, increase interval
b++;
a--;
} else { // if opposite signs, we got our interval
break;
}
}
std::cout << "\nFirst initial: " << a;
std::cout << "\nSecond initial: " << b;
// start iterations
for (i = 0; i < MAX_ITERATIONS; i++) {
x = (a + b) / 2;
z = eq(x);
std::cout << "\n\nz: " << z << "\t[" << a << " , " << b
<< " | Bisect: " << x << "]";
if (z < 0) {
a = x;
} else {
b = x;
}
        if (std::abs(z) < EPSILON)  // stopping criteria
break;
}
std::cout << "\n\nRoot: " << x << "\t\tSteps: " << i << std::endl;
return 0;
}


@@ -0,0 +1,339 @@
/**
* @file
* \brief Compute all possible approximate roots of any given polynomial using
* [Durand Kerner
* algorithm](https://en.wikipedia.org/wiki/Durand%E2%80%93Kerner_method)
* \author [Krishna Vedala](https://github.com/kvedala)
*
* Test the algorithm online:
* https://gist.github.com/kvedala/27f1b0b6502af935f6917673ec43bcd7
*
* Try the highly unstable Wilkinson's polynomial:
* ```
* ./numerical_methods/durand_kerner_roots 1 -210 20615 -1256850 53327946
* -1672280820 40171771630 -756111184500 11310276995381 -135585182899530
* 1307535010540395 -10142299865511450 63030812099294896 -311333643161390640
* 1206647803780373360 -3599979517947607200 8037811822645051776
* -12870931245150988800 13803759753640704000 -8752948036761600000
* 2432902008176640000
* ```
* Sample implementation results to compute approximate roots of the equation
* \f$x^4-1=0\f$:\n
* <img
* src="https://raw.githubusercontent.com/TheAlgorithms/C-Plus-Plus/docs/images/numerical_methods/durand_kerner_error.svg"
* width="400" alt="Error evolution during root approximations computed every
* iteration."/> <img
* src="https://raw.githubusercontent.com/TheAlgorithms/C-Plus-Plus/docs/images/numerical_methods/durand_kerner_roots.svg"
* width="400" alt="Roots evolution - shows the initial approximation of the
* roots and their convergence to a final approximation along with the iterative
* approximations" />
*/
#include <algorithm>
#include <cassert>
#include <cmath>
#include <complex>
#include <cstdlib>
#include <ctime>
#include <fstream>
#include <iostream>
#include <valarray>
#ifdef _OPENMP
#include <omp.h>
#endif
#define ACCURACY 1e-10 /**< maximum accuracy limit */
/**
* Evaluate the value of a polynomial with given coefficients
* \param[in] coeffs coefficients of the polynomial
* \param[in] x point at which to evaluate the polynomial
* \returns \f$f(x)\f$
**/
std::complex<double> poly_function(const std::valarray<double> &coeffs,
std::complex<double> x) {
double real = 0.f, imag = 0.f;
int n;
// #ifdef _OPENMP
// #pragma omp target teams distribute reduction(+ : real, imag)
// #endif
for (n = 0; n < coeffs.size(); n++) {
std::complex<double> tmp =
coeffs[n] * std::pow(x, coeffs.size() - n - 1);
real += tmp.real();
imag += tmp.imag();
}
return std::complex<double>(real, imag);
}
/**
* create a textual form of complex number
* \param[in] x point at which to evaluate the polynomial
* \returns pointer to converted string
*/
const char *complex_str(const std::complex<double> &x) {
#define MAX_BUFF_SIZE 50
static char msg[MAX_BUFF_SIZE];
std::snprintf(msg, MAX_BUFF_SIZE, "% 7.04g%+7.04gj", x.real(), x.imag());
return msg;
}
/**
* check for termination condition
* \param[in] delta point at which to evaluate the polynomial
* \returns `false` if termination not reached
* \returns `true` if termination reached
*/
bool check_termination(long double delta) {
static long double past_delta = INFINITY;
if (std::abs(past_delta - delta) <= ACCURACY || delta < ACCURACY)
return true;
past_delta = delta;
return false;
}
/**
* Implements Durand Kerner iterative algorithm to compute all roots of a
* polynomial.
*
* \param[in] coeffs coefficients of the polynomial
* \param[out] roots the computed roots of the polynomial
* \param[in] write_log flag whether to save the log file (default = `false`)
* \returns pair of values - number of iterations taken and final accuracy
* achieved
*/
std::pair<uint32_t, double> durand_kerner_algo(
const std::valarray<double> &coeffs,
std::valarray<std::complex<double>> *roots, bool write_log = false) {
long double tol_condition = 1;
uint32_t iter = 0;
int n;
std::ofstream log_file;
if (write_log) {
/*
* store intermediate values to a CSV file
*/
log_file.open("durand_kerner.log.csv");
if (!log_file.is_open()) {
perror("Unable to create a storage log file!");
std::exit(EXIT_FAILURE);
}
log_file << "iter#,";
for (n = 0; n < roots->size(); n++) log_file << "root_" << n << ",";
log_file << "avg. correction";
log_file << "\n0,";
for (n = 0; n < roots->size(); n++)
log_file << complex_str((*roots)[n]) << ",";
}
bool break_loop = false;
while (!check_termination(tol_condition) && iter < INT16_MAX &&
!break_loop) {
tol_condition = 0;
iter++;
break_loop = false;
if (log_file.is_open())
log_file << "\n" << iter << ",";
#ifdef _OPENMP
#pragma omp parallel for shared(break_loop, tol_condition)
#endif
for (n = 0; n < roots->size(); n++) {
if (break_loop)
continue;
std::complex<double> numerator, denominator;
numerator = poly_function(coeffs, (*roots)[n]);
denominator = 1.0;
for (int i = 0; i < roots->size(); i++)
if (i != n)
denominator *= (*roots)[n] - (*roots)[i];
std::complex<long double> delta = numerator / denominator;
if (std::isnan(std::abs(delta)) || std::isinf(std::abs(delta))) {
std::cerr << "\n\nOverflow/underrun error - got value = "
<< std::abs(delta) << "\n";
// return std::pair<uint32_t, double>(iter, tol_condition);
break_loop = true;
}
(*roots)[n] -= delta;
#ifdef _OPENMP
#pragma omp critical
#endif
tol_condition = std::max(tol_condition, std::abs(std::abs(delta)));
}
// tol_condition /= (degree - 1);
if (break_loop)
break;
if (log_file.is_open()) {
for (n = 0; n < roots->size(); n++)
log_file << complex_str((*roots)[n]) << ",";
}
#if defined(DEBUG) || !defined(NDEBUG)
if (iter % 500 == 0) {
std::cout << "Iter: " << iter << "\t";
for (n = 0; n < roots->size(); n++)
std::cout << "\t" << complex_str((*roots)[n]);
std::cout << "\t\tabsolute average change: " << tol_condition
<< "\n";
}
#endif
if (log_file.is_open())
log_file << tol_condition;
}
return std::pair<uint32_t, long double>(iter, tol_condition);
}
/**
* Self test the algorithm by checking the roots for \f$x^2+4=0\f$ to which the
* roots are \f$0 \pm 2i\f$
*/
void test1() {
    const std::valarray<double> coeffs = {1, 0, 4};  // x^2 + 4 = 0
std::valarray<std::complex<double>> roots(2);
std::valarray<std::complex<double>> expected = {
std::complex<double>(0., 2.),
std::complex<double>(0., -2.) // known expected roots
};
/* initialize root approximations with random values */
for (int n = 0; n < roots.size(); n++) {
roots[n] = std::complex<double>(std::rand() % 100, std::rand() % 100);
roots[n] -= 50.f;
roots[n] /= 25.f;
}
auto result = durand_kerner_algo(coeffs, &roots, false);
for (int i = 0; i < roots.size(); i++) {
        // check if approximations are within 1e-3 of one of the
        // expected roots
bool err1 = false;
for (int j = 0; j < roots.size(); j++)
err1 |= std::abs(std::abs(roots[i] - expected[j])) < 1e-3;
assert(err1);
}
std::cout << "Test 1 passed! - " << result.first << " iterations, "
<< result.second << " accuracy"
<< "\n";
}
/**
* Self test the algorithm by checking the roots for \f$0.015625x^3-1=0\f$ to
* which the roots are \f$(4+0i),\,(-2\pm3.464i)\f$
*/
void test2() {
const std::valarray<double> coeffs = {// 0.015625 x^3 - 1 = 0
1. / 64., 0., 0., -1.};
std::valarray<std::complex<double>> roots(3);
const std::valarray<std::complex<double>> expected = {
std::complex<double>(4., 0.), std::complex<double>(-2., 3.46410162),
std::complex<double>(-2., -3.46410162) // known expected roots
};
/* initialize root approximations with random values */
for (int n = 0; n < roots.size(); n++) {
roots[n] = std::complex<double>(std::rand() % 100, std::rand() % 100);
roots[n] -= 50.f;
roots[n] /= 25.f;
}
auto result = durand_kerner_algo(coeffs, &roots, false);
for (int i = 0; i < roots.size(); i++) {
        // check if approximations are within 1e-3 of one of the
        // expected roots
bool err1 = false;
for (int j = 0; j < roots.size(); j++)
err1 |= std::abs(std::abs(roots[i] - expected[j])) < 1e-3;
assert(err1);
}
std::cout << "Test 2 passed! - " << result.first << " iterations, "
<< result.second << " accuracy"
<< "\n";
}
/***
* Main function.
 * The command-line input arguments are taken as coefficients of a
 * polynomial. For example, this command
* ```sh
* ./durand_kerner_roots 1 0 -4
* ```
* will find roots of the polynomial \f$1\cdot x^2 + 0\cdot x^1 + (-4)=0\f$
**/
int main(int argc, char **argv) {
/* initialize random seed: */
std::srand(std::time(nullptr));
if (argc < 2) {
test1(); // run tests when no input is provided
test2(); // and skip tests when input polynomial is provided
std::cout << "Please pass the coefficients of the polynomial as "
"commandline "
"arguments.\n";
return 0;
}
    int n, degree = argc - 1;  // number of coefficients = polynomial degree + 1
    std::valarray<double> coeffs(degree);  // create coefficients array
    // number of roots = number of coefficients - 1
std::valarray<std::complex<double>> s0(degree - 1);
std::cout << "Computing the roots for:\n\t";
for (n = 0; n < degree; n++) {
coeffs[n] = strtod(argv[n + 1], nullptr);
if (n < degree - 1 && coeffs[n] != 0)
std::cout << "(" << coeffs[n] << ") x^" << degree - n - 1 << " + ";
else if (coeffs[n] != 0)
std::cout << "(" << coeffs[n] << ") x^" << degree - n - 1
<< " = 0\n";
/* initialize root approximations with random values */
if (n < degree - 1) {
s0[n] = std::complex<double>(std::rand() % 100, std::rand() % 100);
s0[n] -= 50.f;
s0[n] /= 50.f;
}
}
// numerical errors less when the first coefficient is "1"
// hence, we normalize the first coefficient
{
double tmp = coeffs[0];
coeffs /= tmp;
}
clock_t end_time, start_time = clock();
auto result = durand_kerner_algo(coeffs, &s0, true);
end_time = clock();
std::cout << "\nIterations: " << result.first << "\n";
for (n = 0; n < degree - 1; n++)
std::cout << "\t" << complex_str(s0[n]) << "\n";
std::cout << "absolute average change: " << result.second << "\n";
std::cout << "Time taken: "
<< static_cast<double>(end_time - start_time) / CLOCKS_PER_SEC
<< " sec\n";
return 0;
}


@@ -0,0 +1,74 @@
/**
* \file
* \brief Solve the equation \f$f(x)=0\f$ using [false position
 * method](https://en.wikipedia.org/wiki/Regula_falsi), also known as the
 * regula falsi method
*
* Given two points \f$a\f$ and \f$b\f$ such that \f$f(a)<0\f$ and
* \f$f(b)>0\f$, then the \f$(i+1)^\text{th}\f$ approximation is given by: \f[
* x_{i+1} = \frac{a_i\cdot f(b_i) - b_i\cdot f(a_i)}{f(b_i) - f(a_i)}
* \f]
* For the next iteration, the interval is selected
 * as: \f$[a,x]\f$ if \f$f(x)>0\f$ or \f$[x,b]\f$ if \f$f(x)<0\f$. The process
 * is continued until a close enough approximation is achieved.
*
* \see newton_raphson_method.cpp, bisection_method.cpp
*/
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <limits>
#define EPSILON \
1e-6 // std::numeric_limits<double>::epsilon() ///< system accuracy limit
#define MAX_ITERATIONS 50000 ///< Maximum number of iterations to check
/** define \f$f(x)\f$ to find root for
*/
static double eq(double i) {
    return (std::pow(i, 3) - (4 * i) - 9);  // original equation
}
/** get the sign of any given number */
template <typename T>
int sgn(T val) {
return (T(0) < val) - (val < T(0));
}
/** main function */
int main() {
double a = -1, b = 1, x, z, m, n, c;
int i;
// loop to find initial intervals a, b
for (int i = 0; i < MAX_ITERATIONS; i++) {
z = eq(a);
x = eq(b);
if (sgn(z) == sgn(x)) { // same signs, increase interval
b++;
a--;
} else { // if opposite signs, we got our interval
break;
}
}
std::cout << "\nFirst initial: " << a;
std::cout << "\nSecond initial: " << b;
for (i = 0; i < MAX_ITERATIONS; i++) {
m = eq(a);
n = eq(b);
c = ((a * n) - (b * m)) / (n - m);
a = c;
z = eq(c);
        if (std::abs(z) < EPSILON) {  // stopping criteria
break;
}
}
std::cout << "\n\nRoot: " << c << "\t\tSteps: " << i << std::endl;
return 0;
}


@@ -0,0 +1,76 @@
/**
* \file
* \brief [Gaussian elimination
* method](https://en.wikipedia.org/wiki/Gaussian_elimination)
*/
#include <iostream>
/** Main function */
int main() {
int mat_size, i, j, step;
std::cout << "Matrix size: ";
std::cin >> mat_size;
// create a 2D matrix by dynamic memory allocation
double **mat = new double *[mat_size + 1], **x = new double *[mat_size];
for (i = 0; i <= mat_size; i++) {
mat[i] = new double[mat_size + 1];
if (i < mat_size)
x[i] = new double[mat_size + 1];
}
// get the matrix elements from user
    std::cout << std::endl << "Enter the augmented matrix values: " << std::endl;
for (i = 0; i < mat_size; i++) {
for (j = 0; j <= mat_size; j++) {
            std::cin >>
                mat[i][j];  // read mat_size x (mat_size+1) augmented matrix
}
}
// perform Gaussian elimination
for (step = 0; step < mat_size - 1; step++) {
for (i = step; i < mat_size - 1; i++) {
double a = (mat[i + 1][step] / mat[step][step]);
for (j = step; j <= mat_size; j++)
mat[i + 1][j] = mat[i + 1][j] - (a * mat[step][j]);
}
}
std::cout << std::endl
<< "Matrix using Gaussian Elimination method: " << std::endl;
for (i = 0; i < mat_size; i++) {
for (j = 0; j <= mat_size; j++) {
x[i][j] = mat[i][j];
std::cout << mat[i][j] << " ";
}
std::cout << std::endl;
}
std::cout << std::endl
<< "Value of the Gaussian Elimination method: " << std::endl;
for (i = mat_size - 1; i >= 0; i--) {
double sum = 0;
for (j = mat_size - 1; j > i; j--) {
x[i][j] = x[j][j] * x[i][j];
sum = x[i][j] + sum;
}
if (x[i][i] == 0)
x[i][i] = 0;
else
x[i][i] = (x[i][mat_size] - sum) / (x[i][i]);
std::cout << "x" << i << "= " << x[i][i] << std::endl;
}
for (i = 0; i <= mat_size; i++) {
delete[] mat[i];
if (i < mat_size)
delete[] x[i];
}
delete[] mat;
delete[] x;
return 0;
}


@@ -0,0 +1,126 @@
/**
* \file
 * \brief [LU decomposition](https://en.wikipedia.org/wiki/LU_decomposition) of a
* square matrix
* \author [Krishna Vedala](https://github.com/kvedala)
*/
#include <ctime>
#include <iomanip>
#include <iostream>
#include <vector>
#ifdef _OPENMP
#include <omp.h>
#endif
/** Perform LU decomposition on matrix
* \param[in] A matrix to decompose
* \param[out] L output L matrix
* \param[out] U output U matrix
* \returns 0 if no errors
* \returns negative if error occurred
*/
int lu_decomposition(const std::vector<std::vector<double>> &A,
std::vector<std::vector<double>> *L,
std::vector<std::vector<double>> *U) {
int row, col, j;
int mat_size = A.size();
if (mat_size != A[0].size()) {
// check matrix is a square matrix
std::cerr << "Not a square matrix!\n";
return -1;
}
// regularize each row
for (row = 0; row < mat_size; row++) {
// Upper triangular matrix
#ifdef _OPENMP
#pragma omp for
#endif
for (col = row; col < mat_size; col++) {
// Summation of L[i,j] * U[j,k]
double lu_sum = 0.;
for (j = 0; j < row; j++) lu_sum += L[0][row][j] * U[0][j][col];
// Evaluate U[i,k]
U[0][row][col] = A[row][col] - lu_sum;
}
// Lower triangular matrix
#ifdef _OPENMP
#pragma omp for
#endif
for (col = row; col < mat_size; col++) {
if (row == col) {
L[0][row][col] = 1.;
continue;
}
// Summation of L[i,j] * U[j,k]
double lu_sum = 0.;
for (j = 0; j < row; j++) lu_sum += L[0][col][j] * U[0][j][row];
            // Evaluate L[k,i]
L[0][col][row] = (A[col][row] - lu_sum) / U[0][row][row];
}
}
return 0;
}
/**
* operator to print a matrix
*/
template <typename T>
std::ostream &operator<<(std::ostream &out,
std::vector<std::vector<T>> const &v) {
const int width = 10;
const char separator = ' ';
for (size_t row = 0; row < v.size(); row++) {
for (size_t col = 0; col < v[row].size(); col++)
out << std::left << std::setw(width) << std::setfill(separator)
<< v[row][col];
out << std::endl;
}
return out;
}
/** Main function */
int main(int argc, char **argv) {
int mat_size = 3; // default matrix size
const int range = 50;
const int range2 = range >> 1;
if (argc == 2)
mat_size = atoi(argv[1]);
std::srand(std::time(NULL)); // random number initializer
/* Create a square matrix with random values */
std::vector<std::vector<double>> A(mat_size);
std::vector<std::vector<double>> L(mat_size); // output
std::vector<std::vector<double>> U(mat_size); // output
for (int i = 0; i < mat_size; i++) {
        // std::vector value-initializes, so all values are '0' by default
A[i] = std::vector<double>(mat_size);
L[i] = std::vector<double>(mat_size);
U[i] = std::vector<double>(mat_size);
for (int j = 0; j < mat_size; j++)
/* create random values in the limits [-range2, range-1] */
A[i][j] = static_cast<double>(std::rand() % range - range2);
}
std::clock_t start_t = std::clock();
lu_decomposition(A, &L, &U);
std::clock_t end_t = std::clock();
std::cout << "Time taken: "
<< static_cast<double>(end_t - start_t) / CLOCKS_PER_SEC << "\n";
std::cout << "A = \n" << A << "\n";
std::cout << "L = \n" << L << "\n";
std::cout << "U = \n" << U << "\n";
return 0;
}


@@ -0,0 +1,59 @@
/**
* \file
* \brief Solve the equation \f$f(x)=0\f$ using [Newton-Raphson
* method](https://en.wikipedia.org/wiki/Newton%27s_method) for both real and
* complex solutions
*
* The \f$(i+1)^\text{th}\f$ approximation is given by:
* \f[
* x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}
* \f]
*
* \author [Krishna Vedala](https://github.com/kvedala)
* \see bisection_method.cpp, false_position.cpp
*/
#include <cmath>
#include <ctime>
#include <iostream>
#include <limits>
#define EPSILON \
1e-6 // std::numeric_limits<double>::epsilon() ///< system accuracy limit
#define MAX_ITERATIONS 50000 ///< Maximum number of iterations to check
/** define \f$f(x)\f$ to find root for
*/
static double eq(double i) {
return (std::pow(i, 3) - (4 * i) - 9); // original equation
}
/** define the derivative function \f$f'(x)\f$
*/
static double eq_der(double i) {
return ((3 * std::pow(i, 2)) - 4); // derivative of equation
}
/** Main function */
int main() {
std::srand(std::time(nullptr)); // initialize randomizer
double z, c = std::rand() % 100, m, n;
int i;
std::cout << "\nInitial approximation: " << c;
// start iterations
for (i = 0; i < MAX_ITERATIONS; i++) {
m = eq(c);
n = eq_der(c);
z = c - (m / n);
c = z;
        if (std::abs(m) < EPSILON)  // stopping criteria
break;
}
std::cout << "\n\nRoot: " << z << "\t\tSteps: " << i << std::endl;
return 0;
}


@@ -0,0 +1,210 @@
/**
* \file
* \authors [Krishna Vedala](https://github.com/kvedala)
* \brief Solve a multivariable first order [ordinary differential equation
* (ODEs)](https://en.wikipedia.org/wiki/Ordinary_differential_equation) using
* [forward Euler
* method](https://en.wikipedia.org/wiki/Numerical_methods_for_ordinary_differential_equations#Euler_method)
*
* \details
* The ODE being solved is:
* \f{eqnarray*}{
* \dot{u} &=& v\\
* \dot{v} &=& -\omega^2 u\\
* \omega &=& 1\\
* [x_0, u_0, v_0] &=& [0,1,0]\qquad\ldots\text{(initial values)}
* \f}
* The exact solution for the above problem is:
* \f{eqnarray*}{
* u(x) &=& \cos(x)\\
* v(x) &=& -\sin(x)\\
* \f}
* The computation results are stored to a text file `forward_euler.csv` and the
 * exact solution results in `exact.csv` for comparison.
* <img
* src="https://raw.githubusercontent.com/TheAlgorithms/C-Plus-Plus/docs/images/numerical_methods/ode_forward_euler.svg"
* alt="Implementation solution"/>
*
* To implement [Van der Pol
* oscillator](https://en.wikipedia.org/wiki/Van_der_Pol_oscillator), change the
* ::problem function to:
* ```cpp
* const double mu = 2.0;
* dy[0] = y[1];
* dy[1] = mu * (1.f - y[0] * y[0]) * y[1] - y[0];
* ```
* \see ode_midpoint_euler.cpp, ode_semi_implicit_euler.cpp
*/
#include <cmath>
#include <ctime>
#include <fstream>
#include <iostream>
#include <valarray>
/**
* @brief Problem statement for a system with first-order differential
* equations. Updates the system differential variables.
 * \note This function can be adapted to an ODE of any order.
*
* @param[in] x independent variable(s)
* @param[in,out] y dependent variable(s)
* @param[in,out] dy first-derivative of dependent variable(s)
*/
void problem(const double &x, std::valarray<double> *y,
std::valarray<double> *dy) {
const double omega = 1.0; // angular frequency of the oscillator
dy[0][0] = y[0][1]; // x dot
dy[0][1] = -omega * omega * y[0][0]; // y dot
}
/**
* @brief Exact solution of the problem. Used for solution comparison.
*
* @param[in] x independent variable
* @param[in,out] y dependent variable
*/
void exact_solution(const double &x, std::valarray<double> *y) {
y[0][0] = std::cos(x);
y[0][1] = -std::sin(x);
}
/** \addtogroup ode Ordinary Differential Equations
* Integration routines for solving [ordinary differential
* equations](https://en.wikipedia.org/wiki/Ordinary_differential_equation)
* (ODEs) of any order and any number of dependent variables.
* @{
*/
/**
* @brief Compute next step approximation using the forward-Euler
* method. @f[y_{n+1}=y_n + dx\cdot f\left(x_n,y_n\right)@f]
* @param[in] dx step size
* @param[in] x take \f$x_n\f$ and compute \f$x_{n+1}\f$
* @param[in,out] y take \f$y_n\f$ and compute \f$y_{n+1}\f$
* @param[in,out] dy compute \f$f\left(x_n,y_n\right)\f$
*/
void forward_euler_step(const double dx, const double &x,
std::valarray<double> *y, std::valarray<double> *dy) {
problem(x, y, dy);
y[0] += dy[0] * dx;
}
/**
* @brief Compute approximation using the forward-Euler
* method in the given limits.
* @param[in] dx step size
* @param[in] x0 initial value of independent variable
* @param[in] x_max final value of independent variable
* @param[in,out] y take \f$y_n\f$ and compute \f$y_{n+1}\f$
* @param[in] save_to_file flag to save results to a CSV file (`true`) or not (`false`)
* @returns time taken for computation in seconds
*/
double forward_euler(double dx, double x0, double x_max,
std::valarray<double> *y, bool save_to_file = false) {
std::valarray<double> dy = y[0];
std::ofstream fp;
if (save_to_file) {
fp.open("forward_euler.csv", std::ofstream::out);
if (!fp.is_open()) {
std::perror("Error! ");
}
}
std::size_t L = y->size();
/* start integration */
std::clock_t t1 = std::clock();
double x = x0;
do { // iterate for each step of independent variable
if (save_to_file && fp.is_open()) {
// write to file
fp << x << ",";
for (std::size_t i = 0; i < L - 1; i++) {
fp << y[0][i] << ",";
}
fp << y[0][L - 1] << "\n";
}
forward_euler_step(dx, x, y, &dy); // perform integration
x += dx; // update step
} while (x <= x_max); // till upper limit of independent variable
/* end of integration */
std::clock_t t2 = std::clock();
if (fp.is_open())
fp.close();
return static_cast<double>(t2 - t1) / CLOCKS_PER_SEC;
}
/** @} */
/**
* Function to compute and save exact solution for comparison
*
* \param [in] X0 initial value of independent variable
* \param [in] X_MAX final value of independent variable
* \param [in] step_size independent variable step size
* \param [in] Y0 initial values of dependent variables
*/
void save_exact_solution(const double &X0, const double &X_MAX,
const double &step_size,
const std::valarray<double> &Y0) {
double x = X0;
std::valarray<double> y = Y0;
std::ofstream fp("exact.csv", std::ostream::out);
if (!fp.is_open()) {
std::perror("Error! ");
return;
}
std::cout << "Finding exact solution\n";
std::clock_t t1 = std::clock();
do {
fp << x << ",";
for (std::size_t i = 0; i < y.size() - 1; i++) {
fp << y[i] << ",";
}
fp << y[y.size() - 1] << "\n";
exact_solution(x, &y);
x += step_size;
} while (x <= X_MAX);
std::clock_t t2 = std::clock();
double total_time = static_cast<double>(t2 - t1) / CLOCKS_PER_SEC;
std::cout << "\tTime = " << total_time << " s\n";
fp.close();
}
/**
* Main Function
*/
int main(int argc, char *argv[]) {
double X0 = 0.0; /* initial value of independent variable */
double X_MAX = 10.0; /* upper limit of integration */
std::valarray<double> Y0 = {1.0, 0.0}; /* initial value Y = y(x = x_0) */
double step_size;
if (argc == 1) {
std::cout << "\nEnter the step size: ";
std::cin >> step_size;
} else {
// use commandline argument as independent variable step size
step_size = std::atof(argv[1]);
}
// get approximate solution
double total_time = forward_euler(step_size, X0, X_MAX, &Y0, true);
std::cout << "\tTime = " << total_time << " s\n";
/* compute exact solution for comparison */
save_exact_solution(X0, X_MAX, step_size, Y0);
return 0;
}


@@ -0,0 +1,214 @@
/**
* \file
* \authors [Krishna Vedala](https://github.com/kvedala)
* \brief Solve a multivariable first order [ordinary differential equation
* (ODEs)](https://en.wikipedia.org/wiki/Ordinary_differential_equation) using
* [midpoint Euler
* method](https://en.wikipedia.org/wiki/Midpoint_method)
*
* \details
* The ODE being solved is:
* \f{eqnarray*}{
* \dot{u} &=& v\\
* \dot{v} &=& -\omega^2 u\\
* \omega &=& 1\\
* [x_0, u_0, v_0] &=& [0,1,0]\qquad\ldots\text{(initial values)}
* \f}
* The exact solution for the above problem is:
* \f{eqnarray*}{
* u(x) &=& \cos(x)\\
* v(x) &=& -\sin(x)\\
* \f}
* The computation results are stored to a text file `midpoint_euler.csv` and
* the exact solution results in `exact.csv` for comparison. <img
* src="https://raw.githubusercontent.com/TheAlgorithms/C-Plus-Plus/docs/images/numerical_methods/ode_midpoint_euler.svg"
* alt="Implementation solution"/>
*
* To implement [Van der Pol
* oscillator](https://en.wikipedia.org/wiki/Van_der_Pol_oscillator), change the
* ::problem function to:
* ```cpp
* const double mu = 2.0;
* dy[0] = y[1];
* dy[1] = mu * (1.f - y[0] * y[0]) * y[1] - y[0];
* ```
* \see ode_forward_euler.cpp, ode_semi_implicit_euler.cpp
*/
#include <cmath>
#include <ctime>
#include <fstream>
#include <iostream>
#include <valarray>
/**
* @brief Problem statement for a system with first-order differential
* equations. Updates the system differential variables.
* \note This function can be adapted to an ODE of any order.
*
* @param[in] x independent variable(s)
* @param[in,out] y dependent variable(s)
* @param[in,out] dy first-derivative of dependent variable(s)
*/
void problem(const double &x, std::valarray<double> *y,
std::valarray<double> *dy) {
const double omega = 1.0; // angular frequency of the oscillator
dy[0][0] = y[0][1]; // x dot
dy[0][1] = -omega * omega * y[0][0]; // y dot
}
/**
* @brief Exact solution of the problem. Used for solution comparison.
*
* @param[in] x independent variable
* @param[in,out] y dependent variable
*/
void exact_solution(const double &x, std::valarray<double> *y) {
y[0][0] = std::cos(x);
y[0][1] = -std::sin(x);
}
/** \addtogroup ode Ordinary Differential Equations
* @{
*/
/**
* @brief Compute next step approximation using the midpoint-Euler
* method.
* @f[y_{n+1} = y_n + dx\, f\left(x_n+\frac{1}{2}dx,
* y_n + \frac{1}{2}dx\,f\left(x_n,y_n\right)\right)@f]
*
* @param[in] dx step size
* @param[in] x take \f$x_n\f$ and compute \f$x_{n+1}\f$
* @param[in,out] y take \f$y_n\f$ and compute \f$y_{n+1}\f$
* @param[in,out] dy compute \f$f\left(x_n,y_n\right)\f$
*/
void midpoint_euler_step(const double dx, const double &x,
std::valarray<double> *y, std::valarray<double> *dy) {
problem(x, y, dy);
double tmp_x = x + 0.5 * dx;
std::valarray<double> tmp_y = y[0] + dy[0] * (0.5 * dx);
problem(tmp_x, &tmp_y, dy);
y[0] += dy[0] * dx;
}
/**
* @brief Compute approximation using the midpoint-Euler
* method in the given limits.
* @param[in] dx step size
* @param[in] x0 initial value of independent variable
* @param[in] x_max final value of independent variable
* @param[in,out] y take \f$y_n\f$ and compute \f$y_{n+1}\f$
* @param[in] save_to_file flag to save results to a CSV file (`true`) or not (`false`)
* @returns time taken for computation in seconds
*/
double midpoint_euler(double dx, double x0, double x_max,
std::valarray<double> *y, bool save_to_file = false) {
std::valarray<double> dy = y[0];
std::ofstream fp;
if (save_to_file) {
fp.open("midpoint_euler.csv", std::ofstream::out);
if (!fp.is_open()) {
std::perror("Error! ");
}
}
std::size_t L = y->size();
/* start integration */
std::clock_t t1 = std::clock();
double x = x0;
do { // iterate for each step of independent variable
if (save_to_file && fp.is_open()) {
// write to file
fp << x << ",";
for (std::size_t i = 0; i < L - 1; i++) {
fp << y[0][i] << ",";
}
fp << y[0][L - 1] << "\n";
}
midpoint_euler_step(dx, x, y, &dy); // perform integration
x += dx; // update step
} while (x <= x_max); // till upper limit of independent variable
/* end of integration */
std::clock_t t2 = std::clock();
if (fp.is_open())
fp.close();
return static_cast<double>(t2 - t1) / CLOCKS_PER_SEC;
}
/** @} */
/**
* Function to compute and save exact solution for comparison
*
* \param [in] X0 initial value of independent variable
* \param [in] X_MAX final value of independent variable
* \param [in] step_size independent variable step size
* \param [in] Y0 initial values of dependent variables
*/
void save_exact_solution(const double &X0, const double &X_MAX,
const double &step_size,
const std::valarray<double> &Y0) {
double x = X0;
std::valarray<double> y = Y0;
std::ofstream fp("exact.csv", std::ostream::out);
if (!fp.is_open()) {
std::perror("Error! ");
return;
}
std::cout << "Finding exact solution\n";
std::clock_t t1 = std::clock();
do {
fp << x << ",";
for (std::size_t i = 0; i < y.size() - 1; i++) {
fp << y[i] << ",";
}
fp << y[y.size() - 1] << "\n";
exact_solution(x, &y);
x += step_size;
} while (x <= X_MAX);
std::clock_t t2 = std::clock();
double total_time = static_cast<double>(t2 - t1) / CLOCKS_PER_SEC;
std::cout << "\tTime = " << total_time << " s\n";
fp.close();
}
/**
* Main Function
*/
int main(int argc, char *argv[]) {
double X0 = 0.0; /* initial value of independent variable */
double X_MAX = 10.0; /* upper limit of integration */
std::valarray<double> Y0 = {1.0, 0.0}; /* initial value Y = y(x = x_0) */
double step_size;
if (argc == 1) {
std::cout << "\nEnter the step size: ";
std::cin >> step_size;
} else {
// use commandline argument as independent variable step size
step_size = std::atof(argv[1]);
}
// get approximate solution
double total_time = midpoint_euler(step_size, X0, X_MAX, &Y0, true);
std::cout << "\tTime = " << total_time << " s\n";
/* compute exact solution for comparison */
save_exact_solution(X0, X_MAX, step_size, Y0);
return 0;
}


@@ -0,0 +1,211 @@
/**
* \file
* \authors [Krishna Vedala](https://github.com/kvedala)
* \brief Solve a multivariable first order [ordinary differential equation
* (ODEs)](https://en.wikipedia.org/wiki/Ordinary_differential_equation) using
* [semi implicit Euler
* method](https://en.wikipedia.org/wiki/Semi-implicit_Euler_method)
*
* \details
* The ODE being solved is:
* \f{eqnarray*}{
* \dot{u} &=& v\\
* \dot{v} &=& -\omega^2 u\\
* \omega &=& 1\\
* [x_0, u_0, v_0] &=& [0,1,0]\qquad\ldots\text{(initial values)}
* \f}
* The exact solution for the above problem is:
* \f{eqnarray*}{
* u(x) &=& \cos(x)\\
* v(x) &=& -\sin(x)\\
* \f}
* The computation results are stored to a text file `semi_implicit_euler.csv`
* and the exact solution results in `exact.csv` for comparison. <img
* src="https://raw.githubusercontent.com/TheAlgorithms/C-Plus-Plus/docs/images/numerical_methods/ode_semi_implicit_euler.svg"
* alt="Implementation solution"/>
*
* To implement [Van der Pol
* oscillator](https://en.wikipedia.org/wiki/Van_der_Pol_oscillator), change the
* ::problem function to:
* ```cpp
* const double mu = 2.0;
* dy[0] = y[1];
* dy[1] = mu * (1.f - y[0] * y[0]) * y[1] - y[0];
* ```
* \see ode_midpoint_euler.cpp, ode_forward_euler.cpp
*/
#include <cmath>
#include <ctime>
#include <fstream>
#include <iostream>
#include <valarray>
/**
* @brief Problem statement for a system with first-order differential
* equations. Updates the system differential variables.
* \note This function can be adapted to an ODE of any order.
*
* @param[in] x independent variable(s)
* @param[in,out] y dependent variable(s)
* @param[in,out] dy first-derivative of dependent variable(s)
*/
void problem(const double &x, std::valarray<double> *y,
std::valarray<double> *dy) {
const double omega = 1.0; // angular frequency of the oscillator
dy[0][0] = y[0][1]; // x dot
dy[0][1] = -omega * omega * y[0][0]; // y dot
}
/**
* @brief Exact solution of the problem. Used for solution comparison.
*
* @param[in] x independent variable
* @param[in,out] y dependent variable
*/
void exact_solution(const double &x, std::valarray<double> *y) {
y[0][0] = std::cos(x);
y[0][1] = -std::sin(x);
}
/** \addtogroup ode Ordinary Differential Equations
* @{
*/
/**
* @brief Compute next step approximation using the semi-implicit Euler
* method: the first dependent variable is advanced using the old derivatives,
* then the remaining variables are advanced using derivatives re-evaluated at
* the updated value.
* @f[u_{n+1} = u_n + dx\cdot f_u\left(x_n, u_n, v_n\right)@f]
* @f[v_{n+1} = v_n + dx\cdot f_v\left(x_n, u_{n+1}, v_n\right)@f]
* @param[in] dx step size
* @param[in] x take \f$x_n\f$ and compute \f$x_{n+1}\f$
* @param[in,out] y take \f$y_n\f$ and compute \f$y_{n+1}\f$
* @param[in,out] dy compute \f$f\left(x_n,y_n\right)\f$
*/
void semi_implicit_euler_step(const double dx, const double &x,
std::valarray<double> *y,
std::valarray<double> *dy) {
problem(x, y, dy); // update dy once
y[0][0] += dx * dy[0][0]; // update y0
problem(x, y, dy); // update dy once more
dy[0][0] = 0.f; // ignore y0
y[0] += dy[0] * dx; // update remaining using new dy
}
/**
* @brief Compute approximation using the semi-implicit-Euler
* method in the given limits.
* @param[in] dx step size
* @param[in] x0 initial value of independent variable
* @param[in] x_max final value of independent variable
* @param[in,out] y take \f$y_n\f$ and compute \f$y_{n+1}\f$
* @param[in] save_to_file flag to save results to a CSV file (`true`) or not (`false`)
* @returns time taken for computation in seconds
*/
double semi_implicit_euler(double dx, double x0, double x_max,
std::valarray<double> *y,
bool save_to_file = false) {
std::valarray<double> dy = y[0];
std::ofstream fp;
if (save_to_file) {
fp.open("semi_implicit_euler.csv", std::ofstream::out);
if (!fp.is_open()) {
std::perror("Error! ");
}
}
std::size_t L = y->size();
/* start integration */
std::clock_t t1 = std::clock();
double x = x0;
do { // iterate for each step of independent variable
if (save_to_file && fp.is_open()) {
// write to file
fp << x << ",";
for (std::size_t i = 0; i < L - 1; i++) {
fp << y[0][i] << ",";
}
fp << y[0][L - 1] << "\n";
}
semi_implicit_euler_step(dx, x, y, &dy); // perform integration
x += dx; // update step
} while (x <= x_max); // till upper limit of independent variable
/* end of integration */
std::clock_t t2 = std::clock();
if (fp.is_open())
fp.close();
return static_cast<double>(t2 - t1) / CLOCKS_PER_SEC;
}
/** @} */
/**
* Function to compute and save exact solution for comparison
*
* \param [in] X0 initial value of independent variable
* \param [in] X_MAX final value of independent variable
* \param [in] step_size independent variable step size
* \param [in] Y0 initial values of dependent variables
*/
void save_exact_solution(const double &X0, const double &X_MAX,
const double &step_size,
const std::valarray<double> &Y0) {
double x = X0;
std::valarray<double> y = Y0;
std::ofstream fp("exact.csv", std::ostream::out);
if (!fp.is_open()) {
std::perror("Error! ");
return;
}
std::cout << "Finding exact solution\n";
std::clock_t t1 = std::clock();
do {
fp << x << ",";
for (std::size_t i = 0; i < y.size() - 1; i++) {
fp << y[i] << ",";
}
fp << y[y.size() - 1] << "\n";
exact_solution(x, &y);
x += step_size;
} while (x <= X_MAX);
std::clock_t t2 = std::clock();
double total_time = static_cast<double>(t2 - t1) / CLOCKS_PER_SEC;
std::cout << "\tTime = " << total_time << " s\n";
fp.close();
}
/**
* Main Function
*/
int main(int argc, char *argv[]) {
double X0 = 0.0; /* initial value of independent variable */
double X_MAX = 10.0; /* upper limit of integration */
std::valarray<double> Y0 = {1.0, 0.0}; /* initial value Y = y(x = x_0) */
double step_size;
if (argc == 1) {
std::cout << "\nEnter the step size: ";
std::cin >> step_size;
} else {
// use commandline argument as independent variable step size
step_size = std::atof(argv[1]);
}
// get approximate solution
double total_time = semi_implicit_euler(step_size, X0, X_MAX, &Y0, true);
std::cout << "\tTime = " << total_time << " s\n";
/* compute exact solution for comparison */
save_exact_solution(X0, X_MAX, step_size, Y0);
return 0;
}


@@ -0,0 +1,406 @@
/**
* @file
* \brief Linear regression example using [Ordinary least
* squares](https://en.wikipedia.org/wiki/Ordinary_least_squares)
*
* \author [Krishna Vedala](https://github.com/kvedala)
* \details
* Program that reads the number of data samples, the number of features per
* sample, and the output for each sample. It then applies OLS regression to
* compute the regression output for additional test data samples.
*/
#include <iomanip> // for print formatting
#include <iostream>
#include <vector>
/**
* operator to print a matrix
*/
template <typename T>
std::ostream &operator<<(std::ostream &out,
std::vector<std::vector<T>> const &v) {
const int width = 10;
const char separator = ' ';
for (size_t row = 0; row < v.size(); row++) {
for (size_t col = 0; col < v[row].size(); col++)
out << std::left << std::setw(width) << std::setfill(separator)
<< v[row][col];
out << std::endl;
}
return out;
}
/**
* operator to print a vector
*/
template <typename T>
std::ostream &operator<<(std::ostream &out, std::vector<T> const &v) {
const int width = 15;
const char separator = ' ';
for (size_t row = 0; row < v.size(); row++)
out << std::left << std::setw(width) << std::setfill(separator)
<< v[row];
return out;
}
/**
* function to check if given matrix is a square matrix
* \returns `true` if square, `false` otherwise
*/
template <typename T>
inline bool is_square(std::vector<std::vector<T>> const &A) {
// Assuming A is square matrix
size_t N = A.size();
for (size_t i = 0; i < N; i++)
if (A[i].size() != N)
return false;
return true;
}
/**
* Matrix multiplication such that if A is size (mxn) and
* B is of size (pxq) then the multiplication is defined
* only when n = p and the resultant matrix is of size (mxq)
*
* \returns resultant matrix
**/
template <typename T>
std::vector<std::vector<T>> operator*(std::vector<std::vector<T>> const &A,
std::vector<std::vector<T>> const &B) {
// Number of rows in A
size_t N_A = A.size();
// Number of columns in B
size_t N_B = B[0].size();
std::vector<std::vector<T>> result(N_A);
if (A[0].size() != B.size()) {
std::cerr << "Number of columns in A != Number of rows in B ("
<< A[0].size() << ", " << B.size() << ")" << std::endl;
return result;
}
for (size_t row = 0; row < N_A; row++) {
std::vector<T> v(N_B);
for (size_t col = 0; col < N_B; col++) {
v[col] = static_cast<T>(0);
for (size_t j = 0; j < B.size(); j++)
v[col] += A[row][j] * B[j][col];
}
result[row] = v;
}
return result;
}
/**
* multiplication of a matrix with a column vector
* \returns resultant vector
*/
template <typename T>
std::vector<T> operator*(std::vector<std::vector<T>> const &A,
std::vector<T> const &B) {
// Number of rows in A
size_t N_A = A.size();
std::vector<T> result(N_A);
if (A[0].size() != B.size()) {
std::cerr << "Number of columns in A != Number of rows in B ("
<< A[0].size() << ", " << B.size() << ")" << std::endl;
return result;
}
for (size_t row = 0; row < N_A; row++) {
result[row] = static_cast<T>(0);
for (size_t j = 0; j < B.size(); j++) result[row] += A[row][j] * B[j];
}
return result;
}
/**
* pre-multiplication of a vector by a scalar
* \returns resultant vector
*/
template <typename T>
std::vector<float> operator*(float const scalar, std::vector<T> const &A) {
// Number of rows in A
size_t N_A = A.size();
std::vector<float> result(N_A);
for (size_t row = 0; row < N_A; row++) {
result[row] += A[row] * static_cast<float>(scalar);
}
return result;
}
/**
* post-multiplication of a vector by a scalar
* \returns resultant vector
*/
template <typename T>
std::vector<float> operator*(std::vector<T> const &A, float const scalar) {
// Number of rows in A
size_t N_A = A.size();
std::vector<float> result(N_A);
for (size_t row = 0; row < N_A; row++)
result[row] = A[row] * static_cast<float>(scalar);
return result;
}
/**
* division of a vector by a scalar
* \returns resultant vector
*/
template <typename T>
std::vector<float> operator/(std::vector<T> const &A, float const scalar) {
return (1.f / scalar) * A;
}
/**
* subtraction of two vectors of identical lengths
* \returns resultant vector
*/
template <typename T>
std::vector<T> operator-(std::vector<T> const &A, std::vector<T> const &B) {
// Number of rows in A
size_t N = A.size();
std::vector<T> result(N);
if (B.size() != N) {
std::cerr << "Vector dimensions should be identical!" << std::endl;
return A;
}
for (size_t row = 0; row < N; row++) result[row] = A[row] - B[row];
return result;
}
/**
* addition of two vectors of identical lengths
* \returns resultant vector
*/
template <typename T>
std::vector<T> operator+(std::vector<T> const &A, std::vector<T> const &B) {
// Number of rows in A
size_t N = A.size();
std::vector<T> result(N);
if (B.size() != N) {
std::cerr << "Vector dimensions should be identical!" << std::endl;
return A;
}
for (size_t row = 0; row < N; row++) result[row] = A[row] + B[row];
return result;
}
/**
* Get the matrix inverse using row transformations. The given matrix must
* be square and non-singular.
* \returns inverse matrix
**/
template <typename T>
std::vector<std::vector<float>> get_inverse(
std::vector<std::vector<T>> const &A) {
// Assuming A is square matrix
size_t N = A.size();
std::vector<std::vector<float>> inverse(N);
for (size_t row = 0; row < N; row++) {
// preallocate a resultant identity matrix
inverse[row] = std::vector<float>(N);
for (size_t col = 0; col < N; col++)
inverse[row][col] = (row == col) ? 1.f : 0.f;
}
if (!is_square(A)) {
std::cerr << "A must be a square matrix!" << std::endl;
return inverse;
}
// preallocate a temporary matrix identical to A
std::vector<std::vector<float>> temp(N);
for (size_t row = 0; row < N; row++) {
std::vector<float> v(N);
for (size_t col = 0; col < N; col++)
v[col] = static_cast<float>(A[row][col]);
temp[row] = v;
}
// start transformations
for (size_t row = 0; row < N; row++) {
for (size_t row2 = row; row2 < N && temp[row][row] == 0; row2++) {
// this to ensure diagonal elements are not 0
temp[row] = temp[row] + temp[row2];
inverse[row] = inverse[row] + inverse[row2];
}
for (size_t col2 = row; col2 < N && temp[row][row] == 0; col2++) {
// this to further ensure diagonal elements are not 0
for (size_t row2 = 0; row2 < N; row2++) {
temp[row2][row] = temp[row2][row] + temp[row2][col2];
inverse[row2][row] = inverse[row2][row] + inverse[row2][col2];
}
}
if (temp[row][row] == 0) {
// Probably a low-rank matrix and hence singular
std::cerr << "Low-rank matrix, no inverse!" << std::endl;
return inverse;
}
// set diagonal to 1
float divisor = static_cast<float>(temp[row][row]);
temp[row] = temp[row] / divisor;
inverse[row] = inverse[row] / divisor;
// Row transformations
for (size_t row2 = 0; row2 < N; row2++) {
if (row2 == row)
continue;
float factor = temp[row2][row];
temp[row2] = temp[row2] - factor * temp[row];
inverse[row2] = inverse[row2] - factor * inverse[row];
}
}
return inverse;
}
/**
* matrix transpose
* \returns resultant matrix
**/
template <typename T>
std::vector<std::vector<T>> get_transpose(
std::vector<std::vector<T>> const &A) {
std::vector<std::vector<T>> result(A[0].size());
for (size_t row = 0; row < A[0].size(); row++) {
std::vector<T> v(A.size());
for (size_t col = 0; col < A.size(); col++) v[col] = A[col][row];
result[row] = v;
}
return result;
}
/**
* Perform Ordinary Least Squares curve fit. This operation is defined as
* \f[\beta = \left(X^TX\right)^{-1}X^TY\f]
* \param X feature matrix with rows representing sample vector of features
* \param Y known regression value for each sample
* \returns fitted regression model polynomial coefficients
*/
template <typename T>
std::vector<float> fit_OLS_regressor(std::vector<std::vector<T>> const &X,
std::vector<T> const &Y) {
// NxF
std::vector<std::vector<T>> X2 = X;
for (size_t i = 0; i < X2.size(); i++)
// add Y-intercept -> Nx(F+1)
X2[i].push_back(1);
// (F+1)xN
std::vector<std::vector<T>> Xt = get_transpose(X2);
// (F+1)x(F+1)
std::vector<std::vector<T>> tmp = get_inverse(Xt * X2);
// (F+1)xN
std::vector<std::vector<float>> out = tmp * Xt;
// cout << endl
// << "Projection matrix: " << X2 * out << endl;
// Fx1,1 -> (F+1)^th element is the independent constant
return out * Y;
}
/**
* Given data and OLS model coefficients, predict
* regression estimates. This operation is defined as
* \f[y_{\text{row}=i} = \sum_{j=\text{columns}}\beta_j\cdot X_{i,j}\f]
*
* \param X feature matrix with rows representing sample vector of features
* \param beta fitted regression model
* \return vector with regression values for each sample
**/
template <typename T>
std::vector<float> predict_OLS_regressor(std::vector<std::vector<T>> const &X,
std::vector<float> const &beta /**< fitted regression coefficients */
) {
std::vector<float> result(X.size());
for (size_t rows = 0; rows < X.size(); rows++) {
// -> start with constant term
result[rows] = beta[X[0].size()];
for (size_t cols = 0; cols < X[0].size(); cols++)
result[rows] += beta[cols] * X[rows][cols];
}
// Nx1
return result;
}
/**
* main function
*/
int main() {
size_t N, F;
std::cout << "Enter number of features: ";
// number of features = columns
std::cin >> F;
std::cout << "Enter number of samples: ";
// number of samples = rows
std::cin >> N;
std::vector<std::vector<float>> data(N);
std::vector<float> Y(N);
std::cout
<< "Enter training data. Per sample, provide features and one output."
<< std::endl;
for (size_t rows = 0; rows < N; rows++) {
std::vector<float> v(F);
std::cout << "Sample# " << rows + 1 << ": ";
for (size_t cols = 0; cols < F; cols++)
// get the F features
std::cin >> v[cols];
data[rows] = v;
// get the corresponding output
std::cin >> Y[rows];
}
std::vector<float> beta = fit_OLS_regressor(data, Y);
std::cout << std::endl << std::endl << "beta:" << beta << std::endl;
size_t T;
std::cout << "Enter number of test samples: ";
// number of test sample inputs
std::cin >> T;
std::vector<std::vector<float>> data2(T);
// vector<float> Y2(T);
for (size_t rows = 0; rows < T; rows++) {
std::cout << "Sample# " << rows + 1 << ": ";
std::vector<float> v(F);
for (size_t cols = 0; cols < F; cols++) std::cin >> v[cols];
data2[rows] = v;
}
std::vector<float> out = predict_OLS_regressor(data2, beta);
for (size_t rows = 0; rows < T; rows++) std::cout << out[rows] << std::endl;
return 0;
}


@@ -0,0 +1,210 @@
/**
* @file
* \brief Library functions to compute [QR
* decomposition](https://en.wikipedia.org/wiki/QR_decomposition) of a given
* matrix.
* \author [Krishna Vedala](https://github.com/kvedala)
*/
#ifndef NUMERICAL_METHODS_QR_DECOMPOSE_H_
#define NUMERICAL_METHODS_QR_DECOMPOSE_H_
#include <cmath>
#include <cstdlib>
#include <iomanip>
#include <iostream>
#include <limits>
#include <numeric>
#include <valarray>
#ifdef _OPENMP
#include <omp.h>
#endif
/** \namespace qr_algorithm
* \brief Functions to compute [QR
* decomposition](https://en.wikipedia.org/wiki/QR_decomposition) of any
* rectangular matrix
*/
namespace qr_algorithm {
/**
* operator to print a matrix
*/
template <typename T>
std::ostream &operator<<(std::ostream &out,
std::valarray<std::valarray<T>> const &v) {
const int width = 12;
const char separator = ' ';
out.precision(4);
for (size_t row = 0; row < v.size(); row++) {
for (size_t col = 0; col < v[row].size(); col++)
out << std::right << std::setw(width) << std::setfill(separator)
<< v[row][col];
out << std::endl;
}
return out;
}
/**
* operator to print a vector
*/
template <typename T>
std::ostream &operator<<(std::ostream &out, std::valarray<T> const &v) {
const int width = 10;
const char separator = ' ';
out.precision(4);
for (size_t row = 0; row < v.size(); row++) {
out << std::right << std::setw(width) << std::setfill(separator)
<< v[row];
}
return out;
}
/**
* Compute dot product of two vectors of equal lengths
*
* If \f$\vec{a}=\left[a_0,a_1,a_2,...,a_L\right]\f$ and
* \f$\vec{b}=\left[b_0,b_1,b_2,...,b_L\right]\f$ then
* \f$\vec{a}\cdot\vec{b}=\displaystyle\sum_{i=0}^L a_i\times b_i\f$
*
* \returns \f$\vec{a}\cdot\vec{b}\f$
*/
template <typename T>
inline double vector_dot(const std::valarray<T> &a, const std::valarray<T> &b) {
return (a * b).sum();
// could also use following
// return std::inner_product(std::begin(a), std::end(a), std::begin(b),
// 0.f);
}
/**
* Compute magnitude of vector.
*
* If \f$\vec{a}=\left[a_0,a_1,a_2,...,a_L\right]\f$ then
* \f$\left|\vec{a}\right|=\sqrt{\displaystyle\sum_{i=0}^L a_i^2}\f$
*
* \returns \f$\left|\vec{a}\right|\f$
*/
template <typename T>
inline double vector_mag(const std::valarray<T> &a) {
double dot = vector_dot(a, a);
return std::sqrt(dot);
}
/**
* Compute projection of vector \f$\vec{a}\f$ on \f$\vec{b}\f$ defined as
* \f[\text{proj}_\vec{b}\vec{a}=\frac{\vec{a}\cdot\vec{b}}{\left|\vec{b}\right|^2}\vec{b}\f]
*
* \returns the projection vector; if a division by zero is detected,
* \f$\vec{a}\f$ is returned unchanged
*/
template <typename T>
std::valarray<T> vector_proj(const std::valarray<T> &a,
const std::valarray<T> &b) {
double num = vector_dot(a, b);
double deno = vector_dot(b, b);
/*! check for division by zero using machine epsilon */
if (deno <= std::numeric_limits<double>::epsilon()) {
std::cerr << "[" << __func__ << "] Possible division by zero\n";
return a; // return vector a back
}
double scalar = num / deno;
return b * scalar;
}
/**
* Decompose matrix \f$A\f$ using [Gram-Schmidt
*process](https://en.wikipedia.org/wiki/QR_decomposition).
*
* \f{eqnarray*}{
* \text{given that}\quad A &=&
*\left[\mathbf{a}_0,\mathbf{a}_1,\ldots,\mathbf{a}_{N-1}\right]\\
* \text{where}\quad\mathbf{a}_i &=&
* \left[a_{0i},a_{1i},a_{2i},\ldots,a_{(M-1)i}\right]^T\quad\ldots\mbox{(column
* vectors)}\\
* \text{then}\quad\mathbf{u}_i &=& \mathbf{a}_i
*-\sum_{j=0}^{i-1}\text{proj}_{\mathbf{u}_j}\mathbf{a}_i\\
* \mathbf{e}_i &=&\frac{\mathbf{u}_i}{\left|\mathbf{u}_i\right|}\\
* Q &=& \begin{bmatrix}\mathbf{e}_0 & \mathbf{e}_1 & \mathbf{e}_2 & \dots &
* \mathbf{e}_{N-1}\end{bmatrix}\\
* R &=& \begin{bmatrix}\langle\mathbf{e}_0\,,\mathbf{a}_0\rangle &
* \langle\mathbf{e}_0\,,\mathbf{a}_1\rangle &
* \langle\mathbf{e}_0\,,\mathbf{a}_2\rangle & \dots \\
* 0 & \langle\mathbf{e}_1\,,\mathbf{a}_1\rangle &
* \langle\mathbf{e}_1\,,\mathbf{a}_2\rangle & \dots\\
* 0 & 0 & \langle\mathbf{e}_2\,,\mathbf{a}_2\rangle &
* \dots\\ \vdots & \vdots & \vdots & \ddots
* \end{bmatrix}\\
* \f}
*/
template <typename T>
void qr_decompose(
const std::valarray<std::valarray<T>> &A, /**< input matrix to decompose */
std::valarray<std::valarray<T>> *Q, /**< output decomposed matrix */
std::valarray<std::valarray<T>> *R /**< output decomposed matrix */
) {
std::size_t ROWS = A.size(); // number of rows of A
std::size_t COLUMNS = A[0].size(); // number of columns of A
std::valarray<T> col_vector(ROWS);
std::valarray<T> col_vector2(ROWS);
std::valarray<T> tmp_vector(ROWS);
for (int i = 0; i < COLUMNS; i++) {
/* for each column => R is a square matrix of NxN */
int j;
R[0][i] = 0.; /* make R upper triangular */
/* get corresponding Q vector */
#ifdef _OPENMP
// parallelize on threads
#pragma omp for
#endif
for (j = 0; j < ROWS; j++) {
tmp_vector[j] = A[j][i]; /* accumulator for uk */
col_vector[j] = A[j][i];
}
for (j = 0; j < i; j++) {
for (int k = 0; k < ROWS; k++) {
col_vector2[k] = Q[0][k][j];
}
col_vector2 = vector_proj(col_vector, col_vector2);
tmp_vector -= col_vector2;
}
double mag = vector_mag(tmp_vector);
#ifdef _OPENMP
// parallelize on threads
#pragma omp for
#endif
for (j = 0; j < ROWS; j++) Q[0][j][i] = tmp_vector[j] / mag;
/* compute upper triangular values of R */
#ifdef _OPENMP
// parallelize on threads
#pragma omp for
#endif
for (int kk = 0; kk < ROWS; kk++) {
col_vector[kk] = Q[0][kk][i];
}
#ifdef _OPENMP
// parallelize on threads
#pragma omp for
#endif
for (int k = i; k < COLUMNS; k++) {
for (int kk = 0; kk < ROWS; kk++) {
col_vector2[kk] = A[kk][k];
}
R[0][i][k] = (col_vector * col_vector2).sum();
}
}
}
} // namespace qr_algorithm
#endif // NUMERICAL_METHODS_QR_DECOMPOSE_H_


@@ -0,0 +1,58 @@
/**
* @file
* \brief Program to compute the [QR
* decomposition](https://en.wikipedia.org/wiki/QR_decomposition) of a given
* matrix.
* \author [Krishna Vedala](https://github.com/kvedala)
*/
#include <array>
#include <cmath>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include "./qr_decompose.h"
using qr_algorithm::qr_decompose;
using qr_algorithm::operator<<;
/**
* main function
*/
int main(void) {
unsigned int ROWS, COLUMNS;
std::cout << "Enter the number of rows and columns: ";
std::cin >> ROWS >> COLUMNS;
std::cout << "Enter matrix elements row-wise:\n";
std::valarray<std::valarray<double>> A(ROWS);
std::valarray<std::valarray<double>> Q(ROWS);
std::valarray<std::valarray<double>> R(COLUMNS);
for (unsigned int i = 0; i < std::max(ROWS, COLUMNS); i++) {
if (i < ROWS) {
A[i] = std::valarray<double>(COLUMNS);
Q[i] = std::valarray<double>(COLUMNS);
}
if (i < COLUMNS) {
R[i] = std::valarray<double>(COLUMNS);
}
}
for (unsigned int i = 0; i < ROWS; i++)
for (unsigned int j = 0; j < COLUMNS; j++) std::cin >> A[i][j];
std::cout << A << "\n";
clock_t t1 = clock();
qr_decompose(A, &Q, &R);
double dtime = static_cast<double>(clock() - t1) / CLOCKS_PER_SEC;
std::cout << Q << "\n";
std::cout << R << "\n";
std::cout << "Time taken to compute: " << dtime << " sec\n";
return 0;
}


@@ -0,0 +1,284 @@
/**
* @file
* \brief Compute real eigen values and eigen vectors of a symmetric matrix
* using [QR decomposition](https://en.wikipedia.org/wiki/QR_decomposition)
* method.
* \author [Krishna Vedala](https://github.com/kvedala)
*/
#include <cassert>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <limits>
#include <valarray>
#ifdef _OPENMP
#include <omp.h>
#endif
#include "./qr_decompose.h"
using qr_algorithm::operator<<;
#define LIMS 9 /**< limit of range of matrix values */
/**
* create a symmetric square matrix of given size with random elements. A
* symmetric square matrix will *always* have real eigen values.
*
* \param[out] A matrix to create (must be pre-allocated in memory)
*/
void create_matrix(std::valarray<std::valarray<double>> *A) {
int i, j, tmp, lim2 = LIMS >> 1;
int N = A->size();
#ifdef _OPENMP
#pragma omp for
#endif
for (i = 0; i < N; i++) {
A[0][i][i] = (std::rand() % LIMS) - lim2;
for (j = i + 1; j < N; j++) {
tmp = (std::rand() % LIMS) - lim2;
A[0][i][j] = tmp; // symmetrically distribute random values
A[0][j][i] = tmp;
}
}
}
/**
* Perform multiplication of two matrices.
* * Number of columns of A (C1) must equal the number of rows of B (R2)
* * The resultant matrix is of size R1xC2
* \param[in] A first matrix to multiply
* \param[in] B second matrix to multiply
* \param[out] OUT output matrix (must be pre-allocated)
*/
void mat_mul(const std::valarray<std::valarray<double>> &A,
const std::valarray<std::valarray<double>> &B,
std::valarray<std::valarray<double>> *OUT) {
int R1 = A.size();
int C1 = A[0].size();
int R2 = B.size();
int C2 = B[0].size();
if (C1 != R2) {
std::cerr << "Matrix dimensions mismatch!\n";
return;
}
for (int i = 0; i < R1; i++) {
for (int j = 0; j < C2; j++) {
OUT[0][i][j] = 0.f;
for (int k = 0; k < C1; k++) {
OUT[0][i][j] += A[i][k] * B[k][j];
}
}
}
}
namespace qr_algorithm {
/** Compute eigen values using iterative shifted QR decomposition algorithm as
* follows:
* 1. Use last diagonal element of A as eigen value approximation \f$c\f$
* 2. Shift diagonals of matrix \f$A' = A - cI\f$
* 3. Decompose matrix \f$A'=QR\f$
* 4. Compute next approximation \f$A'_1 = RQ \f$
* 5. Shift diagonals back \f$A_1 = A'_1 + cI\f$
* 6. Termination condition check: last element below diagonal is almost 0
* 1. If not 0, go back to step 1 with the new approximation \f$A_1\f$
* 2. If 0, continue to step 7
* 7. Save last known \f$c\f$ as the eigen value.
* 8. Are all eigen values found?
* 1. If not, remove last row and column of \f$A_1\f$ and go back to step 1.
* 2. If yes, stop.
*
* \note The matrix \f$A\f$ gets modified
*
* \param[in,out] A matrix to compute eigen values for
* \param[in] print_intermediates (optional) whether to print intermediate A, Q
* and R matrices (default = `false`)
*/
std::valarray<double> eigen_values(std::valarray<std::valarray<double>> *A,
bool print_intermediates = false) {
int rows = A->size();
int columns = rows;
int counter = 0, num_eigs = rows - 1;
double last_eig = 0;
std::valarray<std::valarray<double>> Q(rows);
std::valarray<std::valarray<double>> R(columns);
/* number of eigen values = matrix size */
std::valarray<double> eigen_vals(rows);
for (int i = 0; i < rows; i++) {
Q[i] = std::valarray<double>(columns);
R[i] = std::valarray<double>(columns);
}
/* continue till all eigen values are found */
while (num_eigs > 0) {
/* iterate with QR decomposition */
while (std::abs(A[0][num_eigs][num_eigs - 1]) >
std::numeric_limits<double>::epsilon()) {
// initial approximation = last diagonal element
last_eig = A[0][num_eigs][num_eigs];
for (int i = 0; i < rows; i++) {
A[0][i][i] -= last_eig; /* A - cI */
}
qr_decompose(*A, &Q, &R);
if (print_intermediates) {
std::cout << *A << "\n";
std::cout << Q << "\n";
std::cout << R << "\n";
printf("-------------------- %d ---------------------\n",
++counter);
}
// new approximation A' = R * Q
mat_mul(R, Q, A);
for (int i = 0; i < rows; i++) {
A[0][i][i] += last_eig; /* A + cI */
}
}
/* store the converged eigen value */
eigen_vals[num_eigs] = last_eig;
if (print_intermediates) {
std::cout << "========================\n";
std::cout << "Eigen value: " << last_eig << ",\n";
std::cout << "========================\n";
}
num_eigs--;
rows--;
columns--;
}
eigen_vals[0] = A[0][0][0];
if (print_intermediates) {
std::cout << Q << "\n";
std::cout << R << "\n";
}
return eigen_vals;
}
} // namespace qr_algorithm
/**
* test function to compute eigen values of a 2x2 matrix
* \f[\begin{bmatrix}
* 5 & 7\\
* 7 & 11
* \end{bmatrix}\f]
* which are approximately, {15.61577, 0.384227}
*/
void test1() {
std::valarray<std::valarray<double>> X = {{5, 7}, {7, 11}};
double y[] = {15.61577, 0.384227}; // expected eigen values
std::cout << "------- Test 1 -------" << std::endl;
std::valarray<double> eig_vals = qr_algorithm::eigen_values(&X);
for (int i = 0; i < 2; i++) {
std::cout << i + 1 << "/2 Checking for " << y[i] << " --> ";
bool result = false;
for (int j = 0; j < 2 && !result; j++) {
if (std::abs(y[i] - eig_vals[j]) < 0.1) {
result = true;
std::cout << "(" << eig_vals[j] << ") ";
}
}
assert(result); // ensure that i^th expected eigen value was computed
std::cout << "found\n";
}
std::cout << "Test 1 Passed\n\n";
}
/**
* test function to compute eigen values of a 5x5 matrix
* \f[\begin{bmatrix}
* -4 &  4 &  2 &  0 & -3\\
*  4 & -4 &  4 & -3 & -1\\
*  2 &  4 &  4 &  3 & -3\\
*  0 & -3 &  3 & -1 & -3\\
* -3 & -1 & -3 & -3 &  0
* \end{bmatrix}\f]
* which are approximately, {9.27648, -9.26948, 2.0181, -1.03516, -5.98994}
*/
void test2() {
std::valarray<std::valarray<double>> X = {{-4, 4, 2, 0, -3},
{4, -4, 4, -3, -1},
{2, 4, 4, 3, -3},
{0, -3, 3, -1, -3},
{-3, -1, -3, -3, 0}};
double y[] = {9.27648, -9.26948, 2.0181, -1.03516,
-5.98994}; // expected eigen values
std::cout << "------- Test 2 -------" << std::endl;
std::valarray<double> eig_vals = qr_algorithm::eigen_values(&X);
std::cout << X << "\n"
<< "Eigen values: " << eig_vals << "\n";
for (int i = 0; i < 5; i++) {
std::cout << i + 1 << "/5 Checking for " << y[i] << " --> ";
bool result = false;
for (int j = 0; j < 5 && !result; j++) {
if (std::abs(y[i] - eig_vals[j]) < 0.1) {
result = true;
std::cout << "(" << eig_vals[j] << ") ";
}
}
assert(result); // ensure that i^th expected eigen value was computed
std::cout << "found\n";
}
std::cout << "Test 2 Passed\n\n";
}
/**
* main function
*/
int main(int argc, char **argv) {
int mat_size = 5;
if (argc == 2) {
mat_size = atoi(argv[1]);
} else { // run the tests when no matrix size is given
test1();
test2();
std::cout << "Usage: ./qr_eigen_values [mat_size]\n";
return 0;
}
if (mat_size < 2) {
fprintf(stderr, "Matrix size should be >= 2\n");
return -1;
}
// initialize random number generator
std::srand(std::time(nullptr));
int i, rows = mat_size, columns = mat_size;
std::valarray<std::valarray<double>> A(rows);
for (int i = 0; i < rows; i++) {
A[i] = std::valarray<double>(columns);
}
/* create a random matrix */
create_matrix(&A);
std::cout << A << "\n";
clock_t t1 = clock();
std::valarray<double> eigen_vals = qr_algorithm::eigen_values(&A);
double dtime = static_cast<double>(clock() - t1) / CLOCKS_PER_SEC;
std::cout << "Eigen vals: ";
for (i = 0; i < mat_size; i++) std::cout << eigen_vals[i] << "\t";
std::cout << "\nTime taken to compute: " << dtime << " sec\n";
return 0;
}


@@ -0,0 +1,40 @@
/**
* \file
* \brief Method of successive approximations using [fixed-point
* iteration](https://en.wikipedia.org/wiki/Fixed-point_iteration) method
*/
#include <cmath>
#include <iostream>
/** equation 1
 * \f[f(y) = 3y - \cos y - 2\f]
 */
static float eq(float y) { return (3 * y) - std::cos(y) - 2; }
/** iteration function, obtained by rewriting \f$f(y)=0\f$ as \f$y=g(y)\f$:
 * \f[g(y) = \frac{\cos y + 2}{3}\f]
 */
static float eqd(float y) { return (std::cos(y) + 2) / 3; }
/** Main function */
int main() {
float y = 0.f, x1 = 0.f, x2 = 0.f;
int i, n;
// tabulate f(y) at a few points to help pick an initial guess
for (i = 0; i < 10; i++) {
std::cout << "value of equation at y = " << y << " : " << eq(y) << "\n";
y++;
}
std::cout << "enter the initial guess x1: ";
std::cin >> x1;
std::cout << "enter the number of iterations to perform: ";
std::cin >> n;
for (i = 0; i <= n; i++) {
x2 = eqd(x1); // next fixed-point approximation
std::cout << "iteration " << i << ": x = " << x2 << "\n";
x1 = x2;
}
return 0;
}
}