Tag Archives: algorithm

Matching Mobile Applications for Cross Promotion (ISR 2020)

Lee, Gene Moo, Shu He, Joowon Lee, Andrew B. Whinston (2020) Matching Mobile Applications for Cross-Promotion. Information Systems Research 31(3), pp. 865-891.

  • Based on an industry collaboration with IGAWorks
  • Presented in Chicago Marketing Analytics (Chicago, IL 2013), WeB (Auckland, New Zealand 2014), Notre Dame (2015), Temple (2015), UC Irvine (2015), Indiana (2015), UT Dallas (2015), Minnesota (2015), UT Arlington (2015), Michigan State (2016), Korea Univ (2021)
  • Dissertation Paper #3
  • Research assistant: Raymond Situ

The mobile applications (apps) market is one of the most successful software markets. As the platform grows rapidly, with millions of apps and billions of users, search costs are increasing tremendously. The challenge is how app developers can target the right users with their apps and how consumers can find the apps that fit their needs. Cross-promotion, advertising a mobile app (target app) in another app (source app), is introduced as a new app-promotion framework to alleviate the issue of search costs. In this paper, we model source app user behaviors (downloads and postdownload usage) with respect to different target apps in cross-promotion campaigns. We construct a novel app similarity measure using latent Dirichlet allocation topic modeling on apps’ product descriptions and then analyze how the similarity between the source and target apps influences users’ app download and usage decisions. To estimate the model, we use a unique data set from a large-scale random matching experiment conducted by a major mobile advertising company in Korea. The empirical results show that consumers prefer more diversified apps when they are making download decisions compared with their usage decisions, which is supported by the psychology literature on people’s variety-seeking behavior. Lastly, we propose an app-matching system based on machine-learning models (on app download and usage prediction) and generalized deferred acceptance algorithms. The simulation results show that app analytics capability is essential in building accurate prediction models and in increasing the ad effectiveness of cross-promotion campaigns and that, at the expense of privacy, individual user data can further improve the matching performance. This paper has implications for the trade-off between utility and privacy in the growing mobile economy.
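The deferred acceptance component mentioned above can be illustrated with the classic one-to-one Gale-Shapley algorithm. This is a simplified sketch (the paper uses generalized variants driven by its prediction models); all names and the toy preference lists are illustrative:

```python
from collections import deque

def deferred_acceptance(proposer_prefs, receiver_prefs):
    """One-to-one Gale-Shapley deferred acceptance.

    proposer_prefs: dict mapping each proposer to an ordered preference list
    of receivers; receiver_prefs: the symmetric dict for receivers.
    Returns a stable matching as {proposer: receiver}.
    """
    # rank[r][p] = position of proposer p in receiver r's preference list
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = deque(proposer_prefs)                  # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}  # next receiver to propose to
    current = {}                                  # receiver -> tentative proposer

    while free:
        p = free.popleft()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in current:                      # receiver free: accept tentatively
            current[r] = p
        elif rank[r][p] < rank[r][current[r]]:    # receiver prefers p: swap
            free.append(current[r])
            current[r] = p
        else:                                     # receiver rejects p
            free.append(p)
    return {p: r for r, p in current.items()}
```

In a cross-promotion setting, the preference lists would be induced by the predicted download/usage scores rather than given exogenously.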

Toward a Better Measure of Business Proximity: Topic Modeling for Industry Intelligence (MISQ 2016)

Shi, Zhan, Gene Moo Lee, Andrew B. Whinston (2016) Toward a Better Measure of Business Proximity: Topic Modeling for Industry Intelligence. MIS Quarterly 40(4), pp. 1035-1056.

In this article, we propose a new data-analytic approach to measure firms’ dyadic business proximity. Specifically, our method analyzes the unstructured texts that describe firms’ businesses using the statistical learning technique of topic modeling, and constructs a novel business proximity measure based on the output. When compared with existing methods, our approach is scalable for large datasets and provides finer granularity on quantifying firms’ positions in the spaces of product, market, and technology. We then validate our business proximity measure in the context of industry intelligence and show the measure’s effectiveness in an empirical application of analyzing mergers and acquisitions in the U.S. high technology industry. Based on the research, we also build a cloud-based information system to facilitate competitive intelligence on the high technology industry.
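The underlying technique, fitting a topic model to business descriptions and scoring pairwise proximity from the resulting topic distributions, can be sketched with scikit-learn. The texts, topic count, and parameters below are illustrative, not the paper's data or settings:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Toy business descriptions standing in for real firm filings.
descriptions = [
    "cloud storage service for enterprise file sharing and backup",
    "enterprise backup and cloud file synchronization platform",
    "mobile game with puzzles and arcade action levels",
]

# Bag-of-words counts, then LDA topic proportions per firm.
counts = CountVectorizer(stop_words="english").fit_transform(descriptions)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)   # rows: firms, columns: topic weights

# Dyadic proximity: cosine similarity between firms' topic distributions.
proximity = cosine_similarity(theta)
```

The proximity matrix is symmetric with ones on the diagonal; off-diagonal entries quantify how close two firms' described businesses are in topic space.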

Improving Sketch Reconstruction Accuracy Using Linear Least Square Method (IMC 2005)

Lee, G. M., Liu, H., Yoon, Y., and Zhang, Y. (2005). Improving Sketch Reconstruction Accuracy Using Linear Least Square Method, In Proceedings of Internet Measurement Conference (IMC 2005), Berkeley, California.

  • IMC is a premier conference in the network measurement area (h5-index: 37)

Sketch is a sublinear space data structure that allows one to approximately reconstruct the value associated with any given key in an input data stream. It is the basis for answering a number of fundamental queries on data streams, such as range queries, finding quantiles, frequent items, etc. In the networking context, sketch has been applied to identifying heavy hitters and changes, which is critical for traffic monitoring, accounting, and network anomaly detection.
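As a concrete illustration of this kind of data structure, here is a minimal count-min-style sketch in Python. It is an illustrative toy, not any paper's implementation; Python's built-in `hash` stands in for proper pairwise-independent hash functions:

```python
import random

class CountMinSketch:
    """Minimal count-min sketch: depth hash rows of width counters each.

    Point queries return the minimum counter across rows, which is always
    an overestimate of the true count (collisions only add, never subtract).
    """
    def __init__(self, width=1024, depth=4, seed=42):
        self.width, self.depth = width, depth
        rng = random.Random(seed)
        # One salt per row so the rows hash keys independently.
        self.salts = [rng.randrange(1 << 31) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        return hash((self.salts[row], key)) % self.width

    def update(self, key, value=1):
        for row in range(self.depth):
            self.table[row][self._index(row, key)] += value

    def query(self, key):
        # Count-min estimate: minimum over rows, never an underestimate.
        return min(self.table[row][self._index(row, key)]
                   for row in range(self.depth))
```

The space used is width × depth counters, independent of the number of distinct keys in the stream, which is what makes the structure sublinear.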

In this paper, we propose a novel approach called lsquare to significantly improve the reconstruction accuracy of the sketch data structure. Given a sketch and a set of keys, we estimate the values associated with these keys by constructing a linear system and finding the optimal solution for the system using the linear least squares method. We use a large amount of real Internet traffic data to evaluate lsquare against countmin, the state-of-the-art sketch scheme. Our results suggest that given the same memory requirement, lsquare achieves much better reconstruction accuracy than countmin. Alternatively, given the same reconstruction accuracy, lsquare requires significantly less memory. This clearly demonstrates the effectiveness of our approach.
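The core idea can be sketched as follows: treat each counter as a linear equation over the unknown key values and solve the resulting system by least squares. This is a simplified sketch assuming a plain counter-array sketch and a known candidate key set; the function names and interface are illustrative, not the paper's actual API:

```python
import numpy as np

def lsquare_estimate(sketch, keys, hash_fns):
    """Estimate per-key values from a counter-array sketch via least squares.

    sketch: list of rows, each a list of counters.
    keys: candidate keys whose values we want to estimate.
    hash_fns: one function per row mapping a key to its counter index.
    """
    width, n_rows = len(sketch[0]), len(sketch)
    # One equation per counter: the values of all keys hashing to that
    # counter sum to the counter's content.
    A = np.zeros((n_rows * width, len(keys)))
    b = np.zeros(n_rows * width)
    for r in range(n_rows):
        for j, key in enumerate(keys):
            A[r * width + hash_fns[r](key), j] = 1.0
        for c in range(width):
            b[r * width + c] = sketch[r][c]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(keys, x))
```

Unlike the count-min point query, which takes a minimum over rows, this jointly uses all counters touched by all candidate keys, which is where the accuracy gain comes from.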

Lecture Notes: NP-Completeness: An Overview

Kim, Y. E. and Lee, G. M. (2003). NP-Completeness: An Overview. Lecture Notes, November 2003.

This paper presents an overview of NP-complete problems. The theory of NP-completeness is important not only in theory but also in practice. First, we will take a look at the formal definition and some examples of NP-complete problems. Then, we will see how to prove that a problem is NP-complete and how to cope with NP-complete problems.
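A defining property of problems in NP is that a proposed solution (a certificate) can be checked in polynomial time, even if finding one may be hard. A minimal illustration for the NP-complete CLIQUE problem (an illustrative sketch, not taken from the notes):

```python
def verify_clique(graph, k, certificate):
    """Polynomial-time verifier for CLIQUE.

    graph: dict mapping each vertex to its set of neighbors.
    k: required clique size; certificate: candidate set of vertices.
    Checks in O(k^2) adjacency lookups that the certificate is a
    clique of size at least k.
    """
    if len(certificate) < k:
        return False
    verts = list(certificate)
    return all(v in graph and u in graph[v]
               for i, v in enumerate(verts) for u in verts[i + 1:])
```

Deciding whether such a certificate exists is NP-complete, but verifying any given one, as above, is easy; this asymmetry is exactly what the definition of NP captures.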