![Convert the perspective function of log-sum-exp to cvx - CVX Forum: a community-driven support forum](http://ask.cvxr.com/uploads/default/optimized/1X/b23033b58ceb6bf3fda4d47a97e3c2b21204a41a_2_1024x278.png)
![Entropy | Free Full-Text | Guaranteed Bounds on Information-Theoretic Measures of Univariate Mixtures Using Piecewise Log-Sum-Exp Inequalities](https://www.mdpi.com/entropy/entropy-18-00442/article_deploy/html/images/entropy-18-00442-g004.png)
![On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning – arXiv Vanity](https://media.arxiv-vanity.com/render-output/4251975/x2.png)
![Elvis Dohmatob on Twitter: "Log-Sum-Exp and negative entropy are convex conjugates (aka Fenchel-Legendre transforms) of one-another.…"](https://pbs.twimg.com/media/D0pUoFhWoAAtvx-.png)
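The conjugacy stated in the tweet above can be written out explicitly. With $\mathrm{LSE}(x) = \log\sum_{i=1}^n e^{x_i}$, its convex conjugate is the negative entropy restricted to the probability simplex:

```latex
\mathrm{LSE}(x) = \log\sum_{i=1}^{n} e^{x_i},
\qquad
\mathrm{LSE}^{*}(p) =
\begin{cases}
\sum_{i=1}^{n} p_i \log p_i, & p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\\[4pt]
+\infty, & \text{otherwise,}
\end{cases}
```

and conversely the conjugate of negative entropy on the simplex recovers log-sum-exp.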
![Gabriel Peyré on Twitter: "The soft-max is the gradient of the log-sum-exp. Central to perform classification using logistic loss. Needs to be stabilised using the log-sum-exp trick."](https://pbs.twimg.com/media/DUIfES0X0AAOsLm.jpg)
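The log-sum-exp trick mentioned in the tweet above is a standard stabilization: subtract the maximum before exponentiating so that `exp` never overflows. A minimal NumPy sketch (function names here are illustrative; SciPy ships this as `scipy.special.logsumexp`):

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log(sum(exp(x))): shift by the max first."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def softmax(x):
    """Softmax is the gradient of log-sum-exp; same shift stabilizes it."""
    z = np.exp(x - np.max(x))
    return z / z.sum()

x = np.array([1000.0, 1000.0, 1000.0])
# The naive np.log(np.sum(np.exp(x))) overflows to inf here;
# the shifted version returns 1000 + log(3).
print(logsumexp(x))
```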
![[PDF] Log-Sum-Exp Neural Networks and Posynomial Models for Convex and Log-Log-Convex Data | Semantic Scholar](https://d3i71xaburhd42.cloudfront.net/528e776a4316bc076acee362106e65c6fbf225a4/9-Figure4-1.png)
![Bound to the log-sum-exp function: there is a relatively simple way to bound the log-sum-exp by a quadratic function. An upper bound was known for the binary case since 1996, due to Jordan and Jaakkola in the context of variational inference.](http://statlearn.free.fr/logsumexpbnd/logsumexpbnd_html_m7d6af51d.png)
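The binary-case quadratic bound referenced in the last caption is the Jaakkola–Jordan bound from variational inference; in its commonly cited form (stated here from memory, worth checking against the original), for any $\xi > 0$,

```latex
\log\!\left(1 + e^{x}\right)
\;\le\;
\log\!\left(1 + e^{\xi}\right)
+ \frac{x - \xi}{2}
+ \lambda(\xi)\left(x^{2} - \xi^{2}\right),
\qquad
\lambda(\xi) = \frac{1}{4\xi}\tanh\!\left(\frac{\xi}{2}\right),
```

with equality at $x = \pm\xi$, which is what makes the bound useful for iterative variational updates.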