FAKE NEWS DETECTION SYSTEM ON INSTAGRAM USING MACHINE LEARNING MULTI-MODEL METHOD

Authors

  • Laxminarayan Sahu Research Scholar, MSIT, MATS University, Raipur (C.G.)
  • Dr. Bhavana Narain Professor, MSIT, MATS University, Raipur (C.G.)

DOI:

https://doi.org/10.29121/shodhkosh.v4.i2.2023.1870

Keywords:

Fake News, Machine Learning, Instagram, Multimodal Method

Abstract [English]

The consumption of news on social media is growing in popularity. Social media appeals to users because it is low-cost, user-friendly, and disseminates information rapidly. However, it also enables false information to circulate, and the harm fake news causes to society has become increasingly difficult to ignore. Relying on news content alone usually yields poor detection performance, because fake news is deliberately crafted to look legitimate; a detailed understanding of the relationship between social media user profiles and fake news is therefore necessary. This study examines the use of machine learning algorithms to detect fake news, covering key topics such as user profiles, dataset analysis, and feature integration. It combines attributes to produce large feature sets, and applies Principal Component Analysis (PCA), a helpful technique for dimensionality reduction when dealing with high-dimensional datasets. Using datasets collected from Instagram and a variety of data-processing techniques, the study extensively analyzes several machine learning models, and the evaluation of the Random Forest classification model is further refined via curve analysis. The results identify the best feature and model pairings, with our model outperforming competing approaches.
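As a rough illustration of the pipeline the abstract describes (combining features into a high-dimensional set, reducing dimensionality with PCA, classifying with Random Forest, and evaluating with a curve-based metric), the following scikit-learn sketch uses a synthetic dataset; the feature counts, component count, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch only: synthetic data stands in for combined post-content
# and user-profile features extracted from Instagram.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# High-dimensional feature matrix (200 features, 20 informative).
X, y = make_classification(n_samples=500, n_features=200,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    PCA(n_components=20),                                 # dimensionality reduction
    RandomForestClassifier(n_estimators=100, random_state=0),  # classifier
)
model.fit(X_train, y_train)

# Curve-based evaluation: area under the ROC curve on held-out data.
scores = model.predict_proba(X_test)[:, 1]
auc = roc_auc_score(y_test, scores)
print("ROC AUC:", auc)
```

Fitting PCA inside the pipeline ensures the projection is learned only from training data, avoiding leakage into the held-out evaluation.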

References

Wiegand T., Sullivan G., Bjontegaard G., and Luthra A., “Overview of the H.264/AVC video coding standard,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 560–576, 2003. DOI: https://doi.org/10.1109/TCSVT.2003.815165

Sullivan G. J., Ohm J.-R., Han W.-J., and Wiegand T., “Overview of the high efficiency video coding (HEVC) standard,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 12, pp. 1649–1668, 2012. DOI: https://doi.org/10.1109/TCSVT.2012.2221191

Bross B., Wang Y.-K., Ye Y., Liu S., Chen J., Sullivan G. J., and Ohm J.-R., “Overview of the versatile video coding (VVC) standard and its applications,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 10, pp. 3736–3764, 2021. DOI: https://doi.org/10.1109/TCSVT.2021.3101953

Yoo D., Kim N., Park S., Paek A. S., and Kweon I.-S., “Pixel-level domain transfer,” in ECCV, 2016. DOI: https://doi.org/10.1007/978-3-319-46484-8_31

Yang L.-C., Chou S.-Y., and Yang Y., “MidiNet: A convolutional generative adversarial network for symbolic-domain music generation,” in ISMIR, 2017.

Schlegl T., Seeböck P., Waldstein S. M., Schmidt-Erfurth U., and Langs G., “Unsupervised anomaly detection with generative adversarial networks to guide marker discovery,” in IPMI, 2017. DOI: https://doi.org/10.1007/978-3-319-59050-9_12

Wu J., Zhang C., Xue T., Freeman W. T., and Tenenbaum J. B., “Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling,” in Proceedings of the 30th International Conference on Neural Information Processing Systems, ser. NIPS’16. Red Hook, NY, USA: Curran Associates Inc., 2016, pp. 82–90.

Kim H., Garrido P., Tewari A., Xu W., Thies J., Niessner M., Pérez P., Richardt C., Zollhöfer M., and Theobalt C., “Deep video portraits,” ACM Trans. Graph., vol. 37, no. 4, Jul. 2018. DOI: https://doi.org/10.1145/3197517.3201283

Suwajanakorn S., Seitz S. M., and Kemelmacher-Shlizerman I., “Synthesizing Obama: Learning lip sync from audio,” ACM Trans. Graph., vol. 36, no. 4, Jul. 2017. DOI: https://doi.org/10.1145/3072959.3073640

“Deepfakes github,” https://github.com/deepfakes/faceswap, accessed: 2020-02-01.

Day C., “The future of misinformation,” Computing in Science & Engineering, vol. 21, no. 1, pp. 108–108, Jan. 2019. DOI: https://doi.org/10.1109/MCSE.2018.2874117

Newson A., Almansa A., Fradet M., Gousseau Y., and Pérez P., “Video inpainting of complex scenes,” SIAM Journal on Imaging Sciences, vol. 7, no. 4, pp. 1993–2019, 2014. DOI: https://doi.org/10.1137/140954933

Ebdelli M., Le Meur O., and Guillemot C., “Video inpainting with short-term windows: Application to object removal and error concealment,” IEEE Transactions on Image Processing, vol. 24, no. 10, pp. 3034–3047, 2015. DOI: https://doi.org/10.1109/TIP.2015.2437193

Xu R., Li X., Zhou B., and Loy C. C., “Deep flow-guided video inpainting,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 3718–3727. DOI: https://doi.org/10.1109/CVPR.2019.00384

Qadir G., Yahaya S., and Ho A., “Surrey University Library for Forensic Analysis (SULFA) of video content,” Jan. 2012, pp. 1–6. DOI: https://doi.org/10.1049/cp.2012.0422

“Grip dataset,” http://www.grip.unina.it/, accessed: 2022-03-10.

Bestagini P., Milani S., Tagliasacchi M., and Tubaro S., “Codec and gop identification in double compressed videos,” IEEE Transactions on Image Processing, vol. 25, no. 5, pp. 2298–2310, May 2016. DOI: https://doi.org/10.1109/TIP.2016.2541960

Published

2023-12-31

How to Cite

Sahu, L., & Narain, B. (2023). FAKE NEWS DETECTION SYSTEM ON INSTAGRAM USING MACHINE LEARNING MULTI-MODEL METHOD. ShodhKosh: Journal of Visual and Performing Arts, 4(2), 873–881. https://doi.org/10.29121/shodhkosh.v4.i2.2023.1870