Machine Learning and Artificial Intelligence
Exponential growth in computing power has made possible the efficient application of perceptron and graph-theoretic solution concepts to networks in e-Commerce, merger and acquisition activity, social networks, and numerous other fields of substantial and growing economic importance. My work has translated traditional economic ‘comparative statics’ models, central to the study of information technology markets, into machine learning models that can highlight emergent behavior and short-period phenomena unavailable in comparative statics. I am particularly impressed with the deep-learning tools from the Google Brain group, which is building an expanding array of products that provide viable “big data” alternatives to traditional statistics. Their tools are rooted in tensor representations of the real world (whereas statistics has evolved mainly from vector and matrix models) and offer many new metrics for the analysis of markets, metrics with equal or greater impact on performance than simple price and value measures. Both offer opportunities for a deeper understanding of technology management and markets using computationally intensive methods.
I am interested in making sense of the extremely large datasets that are increasingly available online and from industry. I take advantage of new, computationally intensive analysis tools, statistical packages, machine learning, and visualization tools to create human-digestible analyses and graphics. I am particularly interested in forensic statistical methods adapted from fields such as actuarial science, information science, and physics. I have been adapting social network analysis and graphics, using R, TensorFlow, and Mathematica, to create inference models that can be designed with a variety of probability, distance, or weight metrics, and that can be constructed through junction tree propagation algorithms with discrete and continuous evidence.
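As one concrete illustration of the distance metrics mentioned above, the sketch below computes shortest-path distances over a small weighted social network using Dijkstra's algorithm; the node names and edge weights are invented for illustration, not drawn from any of the datasets described here.

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: {neighbor: weight}} with non-negative weights.
    # Returns a dict of shortest-path distances from `source`.
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical toy network: weights represent interaction "distances".
net = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 2.0},
    "C": {"A": 4.0, "B": 2.0},
}
print(dijkstra(net, "A"))  # {'A': 0.0, 'B': 1.0, 'C': 3.0}
```

In practice such distances become one of several candidate edge metrics feeding an inference model; probability- or weight-based metrics slot into the same graph representation.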
My work in electronic commerce grew out of studies of stock-market automation (a very dynamic area in the early 1990s, as the number of stock exchanges nearly doubled, partly in response to the crash of 1987, with most new exchanges being electronic). Electronic commerce platforms have since proliferated in such diverse areas as service and asset sharing, exchange of difficult-to-value assets, code bases and ideas, and governance.
I am Editor-in-Chief of one of the oldest electronic commerce journals: Electronic Commerce Research. During my tenure I have moved the journal from its more traditional telecommunications and management information systems focus to a more comprehensive journal embracing cutting-edge research in business, economics, computer science, machine learning and AI, politics, and social science. The journal has steadily improved its bibliometric rankings and is now indexed in the most important repositories.
Information Technology Economics and Valuation
I worked for several years as a CPA in Chicago and Dallas, where I accrued experience in finance and accounting. I have continued some of this work in my academic studies of the economic consequences of technology management and investment. Much of my early work at the University of Southern California’s Center for Software Engineering was on software metrics. There I developed the statistics for the Cocomo 2.0 software costing model. My work on software metrics led me to address the more general problem of IT valuation. Much of the value of information technology is contained in intangible assets – software, methods, ongoing services, and options. These are assets that traditional accounting and finance models find difficult to value because they lack an ‘objective’ basis for valuation.
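The Cocomo 2.0 effort equation mentioned above can be sketched as follows. This is a minimal sketch of the published post-architecture form, effort = A × Size^E × ∏EM with E = B + 0.01 × Σ(scale factors); the calibration constants and the sample inputs are illustrative assumptions here, not the model's official calibration for any particular dataset.

```python
from math import prod

def cocomo_effort(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
    """Estimate effort in person-months for a project of `ksloc` thousand
    source lines of code. `scale_factors` capture economies/diseconomies of
    scale; `effort_multipliers` are cost drivers. A and B are calibration
    constants (defaults here are illustrative, not a definitive calibration).
    """
    E = B + 0.01 * sum(scale_factors)          # scale exponent
    return A * ksloc ** E * prod(effort_multipliers)

# Hypothetical 10 KSLOC project with all cost drivers at nominal (1.0)
# and a made-up set of scale-factor ratings:
pm = cocomo_effort(10.0, scale_factors=[3.72] * 5, effort_multipliers=[1.0])
print(round(pm, 1))
```

The diseconomy of scale shows up in the exponent: when the scale factors push E above 1, doubling size more than doubles estimated effort, which is precisely the kind of nonlinearity that simple per-line cost metrics miss.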
Technology risk became a central topic in finance, project management, and electronic commerce following the 2008 crash. I have been expanding the valuation modeling work in which I engaged from 2000 to 2004 (resulting in the publication of two books) to incorporate models from insurance and from credit- and market-risk modeling.